Is Google Biotech Team Overreaching?
September 9, 2016
Science reality is often inspired by science fiction, and Google’s biotech research division, Verily Life Sciences, is no exception. Business Insider posts, “‘Silicon Valley Arrogance’? Google Misfires as It Strives to Turn Star Trek Fiction Into Reality.” The “Star Trek” reference points to Verily’s Tricorder project, announced three years ago, which set out to create a cancer-early-detection device. Sadly, that hopeful venture may be sputtering out. STAT reporter Charles Piller writes:
Recently departed employees said the prototype didn’t work as hoped, and the Tricorder project is floundering. Tricorder is not the only misfire for Google’s ambitious and extravagantly funded biotech venture, now named Verily Life Sciences. It has announced three signature projects meant to transform medicine, and a STAT examination found that all of them are plagued by serious, if not fatal, scientific shortcomings, even as Verily has vigorously promoted their promise.
Piller cites two projects, besides the Tricorder, that underwhelm. We’re told that independent experts are dubious about the development of a smart contact lens that can detect glucose levels for diabetics. Then there is the very expensive Baseline study—an attempt to define what it means to be healthy and to catch diseases earlier—which critics call “lofty” and “far-fetched.” Not surprisingly, Google being Google, there are also some privacy concerns being raised about the data being collected to feed the study.
There are several criticisms and specific examples in the lengthy article, and interested readers should check it out. There seems to be one central notion, though: Verily Life Sciences is attempting to approach the human body like a computer, when medicine is much, much more complicated than that. The impressive roster of medical researchers on the team seems to provide little solace to critics. The write-up relates:
It’s axiomatic in Silicon Valley’s tech companies that if the math and the coding can be done, the product can be made. But seven former Verily employees said the company’s leadership often seems not to grasp the reality that biology can be more complex and less predictable than computers. They said Conrad, who has a PhD in anatomy and cell biology, applies the confident impatience of computer engineering, along with extravagant hype, to biotech ideas that demand rigorous peer review and years or decades of painstaking work.
Are former employees the most objective source? I suspect ex-Googlers and third-party scientists are underestimating Google. The company has a history of reaching the moon by shooting for the stars, and of enduring a few failures as the price of success. I would not be surprised to see Google emerge on top of the biotech field. (As sci-fi fans know, biotech is the medicine of the future. You’ll see.) The real question is how the company will treat privacy, data rights, and patient safety along the way.
Cynthia Murrell, September 9, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/
Palantir: More Legal Excitement
September 6, 2016
One of the Beyond Search goslings directed my attention to a legal document, “Palantir Technologies Inc. (‘Palantir’) Sues Defendants Marc L. Abramowitz…” The 20-page complaint asserts that a Palantir investor sucked in proprietary information and then used that information outside the boundaries of Sillycon Valley norms of behavior. These norms apply to the one percent of the one percent, in my opinion.
The legal “complaint” points to several patent documents which embodied Palantir’s proprietary information. Locating the documents requires the Justia system; specifically, Provisional Application No. 62/072,36, Provisional Application No. 62/066,716, and Provisional Application No. 62/094,888. These provisional applications, I concluded, reveal that Palantir seeks to enter insurance and health care type markets. Disclosure of this information appears to put Palantir Technologies at a competitive disadvantage.
Who is the individual named in the complaint?
Marc Abramowitz, who is associated with an outfit named KT4. KT4 does not have much of an online presence. The sparse information available to me indicates that Abramowitz is a Harvard-trained lawyer connected to Stanford’s Hoover econo-think unit. His link to Palantir is that he invested in the company and made visits to the Hobbits’ Palo Alto “shire” part of his work routine.
Despite the legalese, Palantir’s annoyance with Abramowitz seeps through the sentences.
For me what is interesting is that IBM i2 asserted several years ago that Palantir Technologies improperly tapped into proprietary methods used in the Analyst’s Notebook software product and system. See “i2 and Palantir: Resolved Quietly.”
One new twist is that the Palantir complaint against Abramowitz includes a reference to his appropriation of the word “Shire.” For those not in the know in Sillycon Valley, Palantir refers to its Palo Alto office as the shire.
When I read the document, I did not spot a reference to Hobbits or seeing stones.
When I checked this morning (September 6, 2016), the document was still publicly accessible at the link above. However, Palantir’s complaint about the US Army’s procurement system was sealed shortly after it was filed. This Abramowitz complaint may go away for some folks as well. If you can’t locate the Abramowitz document, you will have to up your legal research game. My hunch is that neither Palantir nor Mr. Abramowitz will respond to your request for a copy.
There are several hypothetical, Tolkienesque cyclones from this dust-up between an investor and the Palantir outfit, which is alleged to be a mythical unicorn:
- Trust seems to need a more precise definition when dealing with either Palantir or Abramowitz
- Some folks use Tolkien’s jargon and don’t want anyone else to “horn in” on this appropriation
- Filing patents on relatively narrow “new” concepts when one does not have a software engineering track record goes against the accepted norms of innovation
- IBM i2’s team may watch the trajectory of this Abramowitz matter more attentively than the next IBM Watson marketing innovation
Worth monitoring just for the irony molecules in this Palantir complaint. WWTK or What would Tolkien think? Perhaps a quick check of the seeing stone is appropriate.
Stephen E Arnold, September 6, 2016
Government Seeks Sentiment Analysis on Its PR Efforts
September 6, 2016
Sentiment analysis is taking off, and government agencies are using it for PR purposes. Nextgov released a story, “Spy Agency Wants Tech That Shows How Well Its PR Team Is Doing,” which covers the National Geospatial-Intelligence Agency’s request for information about sentiment analysis. The NGA hopes to use this technology to assess its PR efforts to increase public awareness of the agency and communicate its mission, especially to groups such as college students, recruits, and those in the private sector. Commenting on the bigger picture, the author writes,
The request for information appears to be part of a broader effort within the intelligence community to improve public opinion about its operations, especially among younger, tech-savvy citizens. The CIA has been using Twitter since 2014 to inform the public about the agency’s past missions and to demonstrate that it has a sense of humor, according to an Nextgov interview last year with its social media team. The CIA’s social media director said at the time there weren’t plans to use sentiment analysis technology to analyze the public’s tweets about the CIA because it was unclear how accurate those systems are.
The technologies behind sentiment analysis, such as natural language processing and computational linguistics, are attractive in many sectors for PR and other purposes, and the government is no exception. Especially now that the CIA and other organizations are using social media, the space is certainly ripe for government sentiment analysis. Still, we must echo the question the CIA’s social media director posed about accuracy.
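To make that accuracy question concrete, here is a minimal, hypothetical sketch of lexicon-based sentiment scoring in Python. The word lists and sample posts are invented for illustration; a system of the sort the NGA is requesting would rely on trained models rather than hand-built lexicons, and the brittleness of this toy version hints at why accuracy remains an open question.

```python
# Minimal, illustrative lexicon-based sentiment scorer.
# The word lists and sample posts are invented for illustration only;
# real systems rely on trained language models, not hand-built lexicons.

POSITIVE = {"helpful", "transparent", "trust", "impressive", "innovative"}
NEGATIVE = {"secretive", "invasive", "spying", "distrust", "overreach"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: negative = unfavorable, positive = favorable."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

posts = [
    "The agency's outreach feels transparent and genuinely helpful.",
    "More invasive spying dressed up as public relations.",
]
for p in posts:
    print(f"{sentiment_score(p):+.2f}  {p}")
```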
Megan Feil, September 6, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/
Google Enables Users to Delete Search History, Piece by Piece
August 31, 2016
The article on CIO titled “Google Quietly Brings Forgetting to the U.S.” draws attention to the fact that Google has enabled Americans to view and edit their search history. Simply visit My Activity and log in to witness the mind-boggling amount of data Google has collected over your search career. Deleting an item takes just two clicks. But the article points out that deleting a lot of searches will require an afternoon dedicated to cleaning up your history. Afterward, you might find that your searches are less customized, as are your ads and autofills. But the article emphasizes a more communal concern,
There’s something else to consider here, though, and this has societal implications. Google’s forget policy has some key right-to-know overlaps with its takedown policy. The takedown policy allows people to request that stories about or images of them be removed from the database. The forget policy allows the user to decide on his own to delete something…I like being able to edit my history, but I am painfully aware that allowing the worst among us to do the same can have undesired consequences.
Of course, by “the worst among us” he means terrorists. But for many people, the right to privacy is more important than whatever hypothetical setbacks terrorists might suffer under a more totalitarian, Big Brother state. Indeed, Google’s claim that the search history information is entirely private is already suspect. If Google personnel or Google partners can see this data, doesn’t that mean it is no longer private?
Chelsea Kerwin, August 31, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Technical Debt and Technical Wealth
August 29, 2016
I read “Forget Technical Debt. Here’s How to Build Technical Wealth.” Lemons? Make lemonade. Works almost every time.
The write up begins with a reminder that recent code which is tough to improve is a version of legacy code. I understand. I highlighted this statement:
Legacy code isn’t a technical problem. It’s a communication problem.
I am not sure I understand. But let’s move forward in the write up. I noted this statement:
“It’s the law that says your codebase will mirror the communication structures across your organization. If you want to fix your legacy code, you can’t do it without also addressing operations, too. That’s the missing link that so many people miss.”—Andrea Goulet, CEO of Corgibytes
So what’s the fix for legacy code at an outfit like Delta Airlines or the US air traffic control system or the US Internal Revenue Service or a Web site crafted in 1995?
I highlighted this advice:
Forget debt, build technical wealth.
Very MBA-ish. I trust MBAs. Heck, I have affection for some, well, one or two. The mental orientation struck me as quite Wordsworthian:
Stop thinking about your software as a project. Start thinking about it as a house you will live in for a long time…
Just like with a house, modernization and upkeep happens in two ways: small, superficial changes (“I bought a new rug!”) and big, costly investments that will pay off over time (“I guess we’ll replace the plumbing…”). You have to think about both to keep your product current and your team running smoothly. This also requires budgeting ahead — if you don’t, those bigger purchases are going to hurt. Regular upkeep is the expected cost of home ownership. Shockingly, many companies don’t anticipate maintenance as the cost of doing business.
Okay, let’s think about legacy code in something like a “typical” airline or a “typical” agency of the US Executive Branch. Efforts have been made over the last 20 years to improve the systems. Yet these outfits, like many commercial enterprises, are a digital Joseph’s coat of many systems, software, hardware, and methods. The idea is to keep the IRS up and running; that is, good enough to remain dry when it rains and pours.
There is, in my opinion, not enough money to “fix” the IRS systems. Even if there were, the problem of code written by many hands over many years would remain intractable. The idea of “menders” is a good one. But where does one find enough menders to remediate the systems at a big outfit?
Google’s approach is to minimize “legacy” code in some situations. See “Google Is in a Vicious Build Retire Cycle.”
The MBA charts, graphs, and checklists do not deliver wealth. The approach sidesteps a very important fact: there are legacy systems which, if they crash, are increasingly difficult to get back up and running. The thought of remediating systems coded by folks long since retired or deceased is something few people, including me, have a desire to contemplate. Legacy code is a problem, and there is no quick, easy, business-school fix that I know about.
Maybe somewhere? Maybe someplace? Just not in Harrod’s Creek.
Stephen E Arnold, August 29, 2016
Another Robot Finds a Library Home
August 23, 2016
Job automation has its benefits and downsides. Among the benefits: it frees workers up to take on other tasks and brings cost-effectiveness, efficiency, and quicker turnaround. The downsides are that it could eliminate jobs and remove the human factor from customer service. When it comes to libraries, automation and books/research appear to be the antithesis of each other. Automation, better known as robots, is invading libraries once again, and people are up in arms that librarians are going to be replaced.
ArchImag.com shares the story “Robot Librarians Invade Libraries In Singapore” about how the A*Star Research library uses a robot to shelf read. If you are unfamiliar with library lingo, shelf reading means scanning the shelves to make sure all the books are in their proper order. The shelf reading robot has been dubbed AuRoSS. During the night AuRoSS scans books’ RFID tags, then generates a report about misplaced items. Humans are still needed to put materials back in order.
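For the curious, here is a minimal, hypothetical sketch of the misplaced-item check that AuRoSS’s nightly report implies. The call numbers and the simple position comparison are invented for illustration; the actual robot reads RFID tags and presumably checks them against the library catalog.

```python
# Sketch of a misplaced-item report: compare the order of items as scanned on
# the shelf with the order a correctly sorted shelf would have. The call
# numbers below are invented for illustration.

scanned_order = ["QA76.9", "QA76.5", "QA77.1", "QA78.3", "QA78.0"]  # as found on the shelf

def flag_misplaced(scanned):
    """Return items that are not in the slot a sorted shelf would put them in."""
    expected = sorted(scanned)
    return [item for item, correct in zip(scanned, expected) if item != correct]

print("Items to reshelve:", flag_misplaced(scanned_order))  # humans still do the reshelving
```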
The fear, however, is that robots can fulfill the same role as a librarian. Attach a few robotic arms to AuRoSS and it could place the books in the proper places by itself. There already is a robot named Hugh answering reference questions:
New technologies thus seem to be storming libraries. Recall that Hugh, one of the first librarian robots, was to take up his position officially at the university library in Aberystwyth, Wales, at the beginning of September 2016. Designed to handle students’ spoken requests, he can tell them where a desired book is stored or show them which shelf holds the books on the topic that interests them.
It is going to happen. Robots are going to take over the tasks of some current jobs. Professional research and public libraries, however, will still need someone to teach people the proper way to use materials and find resources. It is not as easy as one would think.
Whitney Grace, August 23, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/
Technology That Literally Can Read Your Lips (Coming Soon)
August 19, 2016
The article on Inquisitr titled “Emerging New ‘Lip-Reading’ Technology To Radically Revolutionize Modern-Day Crime Solving” explains the advances in visual speech recognition technology. In 1974, Gene Hackman could have used this technology in the classic film “The Conversation,” in which he plays a surveillance expert trying to get better audio surveillance in public settings where background noise makes clarity almost impossible. Apparently, we have not come very far since the 1970s when it comes to audio speech recognition, but recent strides in lip-reading technology in Norwich have experts excited. The article states,
“Lip-reading is one of the most challenging problems in artificial intelligence, so it’s great to make progress on one of the trickier aspects, which is how to train machines to recognize the appearance and shape of human lips.” A few years ago, German researchers at the Karlsruhe Institute of Technology claimed they had introduced a lip-reading phone that allowed for soundless communication, a development that was to mark a massive leap forward into the future of speech technology.
The article concludes that while progress has been made, there is still a great deal of ground to cover. The complications inherent in recognizing, isolating, and classifying lip movement patterns makes this work even more difficult than audio speech recognition, according to the article. At any rate, this is good news for some folks who want to “know” what is in a picture and what people say when there is no audio track.
Chelsea Kerwin, August 19, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/
Superior Customer Service Promised through the Accenture Virtual Agent Amelia
August 17, 2016
The ZDNet article titled “Accenture Forms New Business Unit Around IPsoft’s Amelia AI Platform” introduces Amelia as a virtual agent capable of providing services in industries such as banking, insurance, and travel. Amelia looks an awful lot like Ava from the film “Ex Machina,” wherein an AI robot manipulates a young programmer by appealing to his empathy. Similarly, Accenture’s Amelia is supposed to be far more expressive and empathetic than her kin in the female AI world, such as Siri or Amazon’s Alexa. The article states,
“Accenture said it will develop a suite of go-to-market strategies and consulting services based off of the Amelia platform…the point is to appeal to executives who “are overwhelmed by the plethora of technologies and many products that are advertising AI or Cognitive capabilities”…For Accenture, the formation of the Amelia practice is the latest push by the company to establish a presence in the rapidly expanding AI market, which research firm IDC predicts will reach $9.2 billion by 2019.”
What’s that behind Amelia, you ask? Looks like a parade of consultants ready and willing to advise the hapless executives who are so overwhelmed by their options. The Amelia AI Platform is being positioned as a superior customer service agent who will usher in the era of digital employees.
Chelsea Kerwin, August 17, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/
SEO Is a Dirty Web Trick
August 17, 2016
Search engine optimization is the bane of Web experts. Why? If you know how to use it, you can increase your rankings in search engines and drive more traffic to your pages, but if you are a novice at SEO, you are screwed. Search Engine Land shares some bad SEO stories in “SEO Is As Dirty As Ever.”
SEO has a bad reputation in many people’s eyes because it is viewed as a surreptitious way to increase traffic. However, used correctly, SEO is not only a nifty trick but also a legitimate tool. As with anything, however, it can go wrong. One bad practice is relying on outdated techniques like keyword stuffing, copied-and-pasted text, and hidden text. Other common technical mistakes involve noindex tags, blocked robots, and JavaScript frameworks whose pages never get indexed.
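As a rough illustration of how a site owner might audit for the technical mistakes mentioned above, here is a minimal sketch using only the Python standard library. The URL is a placeholder, and the checks are crude heuristics, not a substitute for a proper SEO crawl.

```python
# Minimal indexability audit: robots.txt blocking, a noindex meta tag, and a
# crude hidden-text check. The URL is a placeholder; the heuristics are
# illustrative only.

import re
import urllib.request
import urllib.robotparser

URL = "https://example.com/some-page"  # placeholder

def check_indexability(url: str) -> None:
    # 1. Is the URL blocked by robots.txt for all user agents?
    scheme, _, host = url.split("/", 3)[:3]
    rp = urllib.robotparser.RobotFileParser(f"{scheme}//{host}/robots.txt")
    rp.read()
    if not rp.can_fetch("*", url):
        print("robots.txt blocks this URL")

    # 2. Does the page carry a robots meta tag with noindex?
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    if re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', html, re.I):
        print("page contains a noindex directive")

    # 3. Crude hidden-text heuristic: inline styles that hide content.
    if re.search(r'style=["\'][^"\']*display\s*:\s*none', html, re.I):
        print("page contains display:none blocks (possible hidden text)")

check_indexability(URL)
```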
Do not forget other shady techniques like the always famous shady sales, removing links, paid links, spam, link networks, building another Web site on a different domain, abusing review sites, and reusing content. One thing to remember is that:
“It’s not just local or niche companies that are doing bad things; in fact, enterprise and large websites can get away with murder compared to smaller sites. This encourages some of the worst practices I’ve ever seen, and some of these companies do practically everything search engines tell them not to do.”
Ugh! The pot is identifying another pot and complaining about its color and cleanliness.
Whitney Grace, August 17, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on August 23, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233019199/
Why Search Does Not Change Too Much: Tech Debt Is a Partial Answer
August 12, 2016
I read “The Human Cost of Tech Debt.” The write up picks up the theme of the money needed to remediate engineering mistakes, bugs, and shortcuts. The cost of keeping an original system in step with newer market entrants’ products adds another burden.
The write up is interesting and includes some original art. Even though the art is good, the information presented is better; for example:
For a manager, a code base high in technical debt means that feature delivery slows to a crawl, which creates a lot of frustration and awkward moments in conversation about business capability. For a developer, this frustration is even more acute. Nobody likes working with a significant handicap and being unproductive day after day, and that is exactly what this sort of codebase means for developers. Each day they go to the office knowing that it’s going to take the better part of a day to do something simple like add a checkbox to a form. They know that they’re going to have to manufacture endless explanations for why seemingly simple things take them a long time. When new developers are hired or consultants brought in, they know that they’re going to have to face confused looks, followed by those newbies trying to hide mild contempt.
My interest is search and content processing. I asked myself, “Are search and retrieval systems better than they were in 1975?” When I queried the RECON system, I was able to find specific documents which contained information matching the terms in my query. Four decades ago, I could generate a useful result set. The bummer was that the information appeared on weird thermal printer paper. But I usually found the answer to my question in a fraction of the time required for me to run a query on my Windows machine or my Mac.
What’s up?
My view is that search and retrieval tends to be a recycling business. The same basic systems and methods are used again and again. The innovations are wrappers. To make search more user friendly, add-ons look at a user’s query history and, behind the scenes, filter the results to match that history.
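Here is a minimal, hypothetical sketch of that behind-the-scenes filtering. The documents, the inferred interest topics, and the scoring are invented for illustration; real engines blend far richer behavioral and advertising signals. The point is that the underlying retrieval does not change; a wrapper simply re-orders what the user sees.

```python
# Sketch of history-based re-ranking: results that overlap with topics inferred
# from the user's past queries are pushed up. Everything here is invented for
# illustration.

results = [
    {"title": "Cuban restaurants in Washington, DC", "topics": {"food", "travel"}},
    {"title": "Enterprise search vendor comparison", "topics": {"search", "enterprise"}},
    {"title": "Pizza delivery near me", "topics": {"food", "local"}},
]

user_history_topics = {"search", "enterprise", "analytics"}  # inferred from past queries

def personalize(hits, history):
    """Re-rank hits: more overlap with the user's past interests means a higher rank."""
    return sorted(hits, key=lambda doc: len(doc["topics"] & history), reverse=True)

for doc in personalize(results, user_history_topics):
    print(doc["title"])
```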
The shift to mobile has translated into providing results that other people have found useful. Want a pizza? You can find one, but if you want Cuban food in Washington, DC, you may find that the mapping service does not include a popular restaurant, for reasons which may be related to advertising expenditures.
We ran a series of queries across five Dark Web search and retrieval systems. None of them delivered results with both high precision and high recall. To find certain large sites, manual, one-at-a-time clicking and review were needed to locate what we were after.
Regular Web or Dark Web, online search has discarded useful AND, OR, and NOT operators, date and time stamps, and any concern about revealing editorial or filtering postures to the user.
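For contrast, here is a minimal sketch of the Boolean AND, OR, and NOT retrieval that systems such as RECON offered. The tiny corpus and the inverted index are invented for illustration; the point is only how explicit operators behave.

```python
# Boolean retrieval over a toy inverted index. The three-document corpus is
# invented for illustration.

from collections import defaultdict

docs = {
    1: "palantir files complaint over provisional patent applications",
    2: "verily smart contact lens measures glucose for diabetics",
    3: "palantir complaint sealed by army procurement office",
}

# Build the inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def AND(a, b): return a & b
def OR(a, b): return a | b
def NOT(a): return set(docs) - a

# Query: (palantir OR verily) AND complaint NOT sealed
hits = AND(OR(index["palantir"], index["verily"]), index["complaint"]) & NOT(index["sealed"])
print(sorted(hits))  # -> [1]
```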
Technical debt partially explains why most search outfits lack the money to deliver a Class A solution. What about the outfits with oodles of dough and plenty of programmers? For them, improving search is simply not a management priority.
Some vendors’ mobile search operates from the vendor’s copy of the indexed sites. Easy, computationally less expensive, and good enough.
Tech debt is a partial explanation for the sad state of online search at this time.
Stephen E Arnold, August 12, 2016