Real Journalism Forks Real Humans
October 9, 2015
“AP’s Robot Journalists Are Writing Their Own Stories Now” suggests that the wizards who insist automation creates jobs may want to rethink their ideas. Remember the good old days? The Associated Press, United Press International, and other “we use humans” news gathering organizations hired people. Now some of the anecdotes about real journalists are derogatory. I never met a journalist who was inebriated at 9:30 am. Noon? Maybe.
In the write up, the Associated Press, which has a fascinating approach to its ownership, rolled out Automated Insights. The idea was that software filtered and assembled real news stories.
Well, how is that working out?
IBM’s CEO believes that automation will not decimate the work force. Gannett is making an effort to buy up more newspapers so these too can be tooled to the tolerances of the Louisville Courier Journal. Fine newspaper. Fine operation.
And the AP itself? Well, the accumulated loss continues to go up. I recall reading “Employment Rates Are Improving For Everyone But Journalism Majors.”
I noted this passage in a NASDAQ write up:
The prospect of technology-driven job destruction is a matter of great debate for many scientists, technologists, and economists, some of whom predict massive losses in the labor market. In the past, new technology has destroyed jobs and created new ones, but some experts wonder if the increasing power of information technology will leave relatively less and less for people to do.
Journalism majors, unemployed “real” journalists, and contract journalists once called stringers—life is only going to get better. Lyft will make it easier for some folks to become taxi drivers. There are plenty of jobs as data scientists, a profession eager for those who can write prose. There are also opportunities to become experts in search and content processing. Hey, words are words.
Stephen E Arnold, October 9, 2015
Attivio Does Data Dexterity
October 9, 2015
Enterprise search company Attivio has an interesting post in their Data Dexterity Blog titled “3 Questions for the CEO.” We tend to keep a close eye on industry leader Attivio, and for good reason. In this post, the company’s senior director of product marketing Jane Zupan posed a few questions to her CEO, Stephen Baker, about their role in the enterprise search market. Her first question has Baker explaining his vision for the field’s future, “search-based data discovery”; he states:
“With search-based data discovery, you would simply type a question in your natural language like you do when you perform a search in Google and get an answer. This type of search doesn’t require a visualization tool. So, for example, you could ask a question like ‘tell me what type of weather conditions which exist most of the time when I see a reduction in productivity in my oil wells.’ The answer that comes back, such as ‘snow,’ or ‘sleet,’ gives you insights into how weather patterns affect productivity. Right now, search can’t infer what a question means. They match the words in a query, or keywords, with words in a document. But [research firm] Gartner says that there is an increasing importance for an interface in BI tools that extend BI content creation, analysis and data discovery to non-skilled users. You don’t need to be familiar with the data or be a business analyst or data scientist. You can be anyone and simply ask a question in your words and have the search engine deliver the relevant set of documents.”
Yes, many of us are looking forward to that day. Will Attivio be the first to deliver? The interview goes on to discuss the meaning of the company’s slogan, “the data dexterity company.” Part of the answer involves gaining access to “dark data” buried within organizations’ data silos. Finally, Zupan asks what “sets Attivio apart?” Baker’s answers: the ability to quickly access data from more sources; deriving structure from and analyzing unstructured data; and friendliness to “non-technical” users.
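The gap Baker describes, between matching keywords and actually answering a question, can be illustrated with a toy sketch. The documents and the ranking function below are my own invention, not Attivio's technology:

```python
# Toy sketch of the gap Baker describes: keyword matching retrieves
# documents that share words with the query; nothing here can infer
# that the "answer" to the weather question is snow. All data invented.

def keyword_match(query, documents):
    """Rank documents by how many query words they share."""
    query_words = set(query.lower().split())
    scored = []
    for doc in documents:
        overlap = query_words & set(doc.lower().split())
        scored.append((len(overlap), doc))
    scored.sort(reverse=True)
    return [doc for score, doc in scored if score > 0]

docs = [
    "well productivity fell during snow and sleet conditions",
    "quarterly productivity report for the drilling division",
    "weather delayed the pipeline inspection",
]

results = keyword_match(
    "what weather conditions reduce productivity in my oil wells", docs
)
# The top hit shares the most words with the query, but the pipeline
# never "knows" the answer is snow; that inference is the missing piece.
print(results[0])
```

The point of the sketch: the relevant document comes back first, but extracting "snow" as an answer requires the inference layer Baker says today's search lacks.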
Launched in 2008, Attivio is headquartered in Newton, Massachusetts. Their team includes folks with an advantageous combination of backgrounds: in search, database, and business intelligence companies.
Cynthia Murrell, October 9, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Savanna Offers Simplistic Search and Analytics
October 9, 2015
Thetus Corporation created Savanna, a browser-based, collaborative all-source analysis platform. The company just released its 4.5 upgrade to Savanna, which promises to keep users ahead of the competition with insightful information and business connections. Savanna 4.5 comes with improvements to search, upload, and content management, plus new ways to work with structured data. Virtual Strategy Magazine shares the details about the upgrade in “Savanna 4.5 Provides For Meaningful Analysis In Minutes.”
The most talked about feature in the upgrade is the new meaningful analysis:
“New avenues for structured data visualization in Savanna 4.5 allow analysts to uncover new connections between data, deepening their analysis and bringing new insights. The ongoing improvements to Savanna refine the analysis process by making it easy for analysts to search for and manage content, enhancing the overall Savanna experience. Licensed Savanna customers can expect new updates and enhancements on a regular basis.”
Also included in the upgrade is a more intuitive search layout with improved filters for content and source selection, more options to customize a timeline’s appearance, more options for structured data visualization, and integrated upload capabilities with faster upload and better classification.
Some of the new features are standard options in other analytics software, but Thetus has a good track record of generating new business insights with its software.
Whitney Grace, October 9, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Stanford Offers an NLP Demo
October 8, 2015
Want to see what natural language processing does in the Stanford CoreNLP implementation? Navigate to Stanford CoreNLP. The service is free. Enter some text. The system will scan the input and display an output. NLP revealed:
What can one do with this output? Build an application around the outputs. NLP is easy. The artificial intelligence implementation is a bit of a challenge, of course, but parts of speech, named entities, and dependency parsing can be darned useful. Now mixed language inputs may be an issue. Names in different character sets could be a hurdle. I am thrilled that NLP has been visualized using the brat visualization and annotation software. Get excited, gentle reader.
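For the "build an application around the outputs" part, here is a minimal sketch of walking a CoreNLP-style annotation. The JSON shape (sentences containing tokens with `pos` and `ner` fields) follows the CoreNLP server's documented output; the sample values are hand-written for illustration, not actual demo output:

```python
import json

# Hand-written sample shaped like the JSON the Stanford CoreNLP server
# returns (sentences -> tokens with pos/ner fields). The values are
# illustrative, not captured from the live demo.
sample = json.loads("""
{
  "sentences": [{
    "tokens": [
      {"word": "Stanford", "pos": "NNP", "ner": "ORGANIZATION"},
      {"word": "offers",   "pos": "VBZ", "ner": "O"},
      {"word": "an",       "pos": "DT",  "ner": "O"},
      {"word": "NLP",      "pos": "NN",  "ner": "O"},
      {"word": "demo",     "pos": "NN",  "ner": "O"}
    ]
  }]
}
""")

def named_entities(annotation):
    """Collect (word, ner_tag) pairs for tokens tagged as entities."""
    entities = []
    for sentence in annotation["sentences"]:
        for token in sentence["tokens"]:
            if token["ner"] != "O":
                entities.append((token["word"], token["ner"]))
    return entities

print(named_entities(sample))  # [('Stanford', 'ORGANIZATION')]
```

A few lines of traversal turn the raw annotation into the named entities and parts of speech an application would actually consume.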
Stephen E Arnold, October 8, 2015
Amazon Updates Sneaker Net
October 8, 2015
I remember creating a document, copying the file to a floppy, and then walking up one flight of steps to give the floppy to my boss. He took the floppy, loaded it into his computer, and made changes. A short time later he would walk down one flight of steps, hand me the floppy with his file on it, and I would review the changes.
I thought this was the cat’s pajamas for two reasons:
- I did not have to use ledger paper, sharpen a pencil, and cramp my fingers
- Multiple copies existed so I no longer had to panic when I spilled my Fresca across my desk.
Based on the baloney I read every day about the super wonderful high speed, real time cloud technology, I was shocked when I read “Snowball’s Chance in Hell? Amazon Just Launched a Physical Data Transfer Service.” The news struck me as more important than the yap and yammer about Amazon disrupting cloud business and adding partners.
Here’s the main point I highlighted in pragmatic black:
A Snowball device is ordered through the AWS Management Console and is delivered to site within a few days; customers can order multiple devices and devices can be run in parallel. Described as coming in its “own shipping container” (it doesn’t require packing or unpacking) the Snowball is entirely self-contained, complete with 110 Volt power, a 10 GB network connection on the back and an E Ink display/control panel on the front. Once received it’s simply a matter of plugging the device in, connecting it to a network, configuring the IP address, and installing the AWS Snowball client; a job manifest and 25 character unlock code complete the task. When the transfer of data is complete the device is disconnected and a shipping label will automatically appear on the E Ink display; once shipped back to Amazon (currently only the Oregon data center is supporting the service, with others to follow) the data will be decrypted and copied to S3 bucket(s) as specified by the customer.
There you go. Sneaker net updated with FedEx, UPS, or another shipping service. Definitely better than carrying an appliance up and down stairs. I was hoping that individuals participating in the Mechanical Turk system would be available to pick up an appliance and deliver it to the Amazon customer and then return the gizmo to Amazon. If Amazon can do Etsy-type stuff, it can certainly do Uber-type functions, right?
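The sneaker net appeal is easy to quantify with back-of-the-envelope arithmetic. The 50 TB capacity and link speeds below are illustrative assumptions for the comparison, not Amazon's published specifications:

```python
# Why shipping a box of disks can beat the wire: days to move a dataset
# over a fully saturated link. Capacity and link speeds are assumptions
# chosen for illustration.

def transfer_days(terabytes, megabits_per_second):
    """Days to move `terabytes` over a saturated link of the given speed."""
    bits = terabytes * 1e12 * 8              # decimal TB -> bits
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86400                   # seconds per day

# 50 TB over a 100 Mbps line vs. a few days with a courier:
print(round(transfer_days(50, 100), 1))   # ~46.3 days on the wire
print(round(transfer_days(50, 1000), 1))  # ~4.6 days even at 1 Gbps
```

At typical enterprise uplink speeds, the courier wins by weeks, which is the whole business case for the appliance.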
When will the future arrive? No word on how the appliance will interact with Amazon’s outstanding search system. I wish I knew how to NOT out unpublished books or locate mysteries by Japanese authors available in English. Hey, there is a sneaker net. Focus on the important innovations.
Stephen E Arnold, October 8, 2015
Another Categorical Affirmative: Nobody Wants to Invest in Search
October 8, 2015
Gentle readers, I read “Autonomy Poisoned the Well for Businesses Seeking VC Cash.” Keep in mind that I am capturing information which appeared in a UK publication. I find this type of essay interesting and entertaining. Will you? Beats me. One thing is certain. This topic will not be fodder for the LinkedIn discussion groups, the marketers hawking search and retrieval at conferences to several dozen fellow travelers, or in consultant reports promoting the almost unknown laborers in the information access vineyards.
Why not?
The problem with search reaches back a few years, but I will add a bit of historical commentary after I highlight what strikes me as the main point of the write up:
Nobody wants to invest in enterprise search, says startup head. Patrick White, Synata
Many enterprise search systems are a bit like the USS United States, once the slickest ocean liner in the world. The ship still looks like a ship, but making it seaworthy would be a project with a hefty price tag. Implementing an enterprise search solution is a similar ocean-going effort.
There you go. “Nobody.” A categorical in the “category” of logic like “All men are mortal.” Remarkable because outfits like Attivio, Coveo, and Digital Reasoning, among others have received hefty injections of venture capital in recent memory.
The write up makes this interesting point:
“I think Autonomy really messed up [the space]”, and when investors hear ‘enterprise search for the cloud’ it “scares the crap out of them”, he added. “Autonomy has poisoned the well for search companies.” However, White added that Autonomy was just the most high profile example of cases that have scared off investors. “It is unfair just to blame Autonomy. Most VCs have at least one enterprise search in their portfolio. So VCs tend to be skittish about it,” he added.
I am not sure I agree. Before there was Autonomy, there was Fulcrum Technologies. The company’s marketing literature is as fresh today as it was in the 1990s. The company was up, down, bought, and merged. The story of Fulcrum, at least up to 2009 or so, is available at this link.
The hot and cold nature of search and content processing may be traced through the adventures of Convera (formerly Excalibur Technologies) and its relationships with Intel and the NBA, Delphes (a Canadian flame out), Entopia (a we can do it all), and, of course, Fast Search & Transfer.
Now Fast Search, like most old school search technology, is very much with us. For a dose of excitement one can have Search Technologies (founded by some Convera wizards) implement Fast Search (now owned by Microsoft).
Where Are the Former Big Six Enterprise Search Vendors of 2004 in 2015?
Autonomy, now owned by HP and mired in litigation over allegations of financial fraud
Convera, after struggles with Intel and NBA engagements, portions of the company were sold off. Essentially out of business. Alums are consultants.
Endeca, owned by Oracle and sold as an eCommerce and business intelligence service. Oracle gives away its own enterprise search system.
Exalead, owned by Dassault Systèmes and now marketed as a product component system. No visibility in the US.
Fast Search, owned by Microsoft and still available as a utility for SharePoint. The technology dates from the late 1990s. Brand is essentially low profiled at this time.
Verity, Autonomy purchased Verity and used its customer list for upsales and used the K2 technology as part of the sprawling IDOL suite.
Fast Search reported revenues which after an investigation and court procedure were found to be a bit enthusiastic. The founder of Fast Search was the subject of the Norwegian authorities’ attention. You can check out the news reports about the prohibition on work and the sentence handed down for the issues the authorities concluded warranted a slap on the wrist and a tap on the head.
The story of enterprise search has been one of efforts, sometimes Herculean, to sell information access companies. When a company like Vivisimo sells for about one year’s revenues, an estimated $20 million, there is a sense of getting that mythic task accomplished. IBM, like most of the other acquirers of search technology, tries valiantly to convert a utility into something with revenue lift. As I watch the evolution of the lucky exits, my overall impression is that the purchasers realize that search is a utility function. Search can generate consulting and engineering fees, but the customers want more.
That realization leads to the wild and crazy hyper marketing for products like Hewlett Packard’s cloud version of Autonomy’s IDOL and DRE technology or IBM’s embrace of open source search and the wisdom of wrapping that core with functions.
Enterprise search, therefore, is alive and well within applications or solutions that are more directly related to something that speaks to senior managers; namely, making sales and reducing costs.
What’s the cost of making sure the controls for an enterprise search system are working and doing the job the licensee wants done?
The problem is the credit card debt load which Googlers explained quite clearly. Technology outfits, particularly information access players, need more money than most firms can generate. This contributes to the crazy flips: from search to police analysis, from looking up an entry in a database to an assertion that customer support is enabled, from hunting for an article in this blog to real time, active business intelligence, or from indexing by proper noun like White House to natural language understanding of unstructured text.
Investments are flowing to firms which could be easily positioned as old school search and retrieval operations. Consider Lexmark, a former unit of IBM, and an employer of note not far from my pond filled with mine run off in Kentucky. The company, like Hewlett Packard, wants to find a way to replace its traditional business, which was not working as planned. Lexmark bought Brainware, a company with patents on trigram methods and a good business for processing content related to legal matters. Lexmark is doing its best to make that into a Trump-scale back office content processing business. Lexmark then bought a technology dating from the 1980s (ISYS Search Software, once officed in Crows Nest, I believe) and has made search a cornerstone of the Lexmark next generation health care money spinning machine. Oracle has a number of search properties. Most of these are unknown to Oracle DBAs; for example, Artificial Linguistics, TripleHop, InQuira’s shotgun NLP technology, etc. The point is that the “brands” have not had enough magnetism to pull revenues on a stand alone basis.
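For readers who have not met the trigram methods the Brainware patents covered, here is a textbook sketch of character-trigram similarity. This is the generic family of technique, not the patented algorithm itself:

```python
# Generic character-trigram matching, the family of technique behind
# the Brainware patents. A textbook sketch, not the patented method.

def trigrams(text):
    """Set of overlapping 3-character windows, lowercased."""
    text = text.lower()
    return {text[i:i + 3] for i in range(len(text) - 2)}

def similarity(a, b):
    """Jaccard overlap of two strings' trigram sets (0.0 to 1.0)."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

# Trigrams tolerate the OCR noise common in back office document capture,
# which is why the approach suited legal and invoice processing work:
print(similarity("invoice number", "lnvoice nvmber") > 0.4)  # True
```

The attraction for back office content processing is exactly this fuzziness: scanned documents full of OCR errors still match their clean counterparts.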
Success measured in investment dollars is not revenue. Palantir is, in effect, a search and retrieval outfit packaged as a super stealthy smart intelligence system. Recorded Future, funded by Google and In-Q-Tel, is doing a bang up job with specialized content processing. These are, remember, search and retrieval companies.
The money in search appears to be made in these plays:
- The Fast Search model. Short cuts until an investigator puts a stop to the activities.
- Creating a company and then selling it to a larger firm with a firm conviction that it can turn search into a big time money machine
- Buying a search vendor to get its customers and opportunities to sell other enterprise software to those customers
- Creating a super technology play and going after venture funding until a convenient time arrives to cash out
- Pursuing a dream for intelligent software and surviving on research grants.
This list does not exhaust what is possible. There are me-too plays. There are mobile niche plays. There are apps which are thinly disguised selective dissemination of information services.
The point is that Autonomy is a member of the search and retrieval club. The company’s revenues came from two principal sources:
- Autonomy bought companies like Verity and video indexing and management vendor Virage, then sold other products to these firms’ clients and incorporated some of the acquired technology into products and services which allowed Autonomy to enter a new market. Remember Autonomy and enhanced video ads?
- Autonomy managed well. If one takes the time to speak with former Autonomy sales professionals, the message is that life was demanding. Sales professionals including partners had to produce revenue or some face time with the delightful Dr. Michael Lynch or other senior Autonomy executives was arranged.
That’s it. Upselling and intense management for revenues. Hewlett Packard was surprised at the simplicity of the Autonomy model and apparently uncomfortable with the management policies and procedures that Autonomy had been using in highly visible activities for more than a decade as a publicly traded company.
Perhaps some sources of funding will disagree with my view of Autonomy. That is definitely okay. I am retired. My house is paid for. I have no charming children in a private school or university.
The focus should be on what the method for generating revenue is. The technology is of secondary importance. When IBM uses “good enough” open source search, there is a message there, gentle reader. Why reinvent the wheel?
The trick is to ask the right questions. If one does not ask the right questions, the person doing the querying is likely to draw incorrect conclusions and make mistakes. Where does the responsibility rest when one makes a bad decision?
The other point of interest should be making sales. Stated in different terms, the key question for a search vendor, regardless of camouflage, is: what problem are you solving? Then ask, “Will people pay money for this solution?”
If the search vendor cannot or will not answer these questions and provide data that can be verified, the questioner runs the risk of taking the USS United States for a cruise, right after refurbishing the ship, making it seaworthy, and hiring a crew.
The enterprise search sector is guilty of making a utility function appear to be a solution to business uncertainty. Why? To make sales. Caveat emptor.
Stephen E Arnold, October 8, 2015
Compare Cell Phone Usage in Various Cities
October 8, 2015
Ever wonder how cell phone usage varies around the globe? Gizmodo reports on a tool that can tell us, called ManyCities, in their article, “This Website Lets You Study Cell Phone Use in Cities Around the World.” The project is a team effort from MIT’s SENSEable City Laboratory and networking firm Ericsson. Writer Jamie Condliffe tells us that ManyCities:
“…compiles mobile phone data — such as text message traffic, number of phone calls, and the amount of data downloaded —from base stations in Los Angeles, New York, London, and Hong Kong between April 2013 and January 2014. It’s all anonymised, so there’s no sensitive information on display, but there is enough data to understand usage patterns, even down the scale of small neighbourhoods. What’s nice about the site is that there are plenty of intuitive interpretations of the data available from the get-go. So, you can see how phone use varies geographically, say, or by time, spotting the general upward trend in data use or how holidays affect the number of phone calls. And then you can dig deeper, to compare data use over time between different neighbourhoods or cities: like, how does the number of texts sent in Hong Kong compare to New York? (It peaks in Hong Kong in the morning, but in the evening in New York, by the way.)”
The software includes some tools that go a little further, as well; users can cluster areas by usage patterns or incorporate demographic data. Condliffe notes that this information could help with a lot of tasks; forecasting activity and demand, for example. If only it were available in real time, he laments, though he predicts that will happen soon. Stay tuned.
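The peak-hour comparison Condliffe mentions (texts peak in the morning in Hong Kong but in the evening in New York) boils down to a simple group-by over usage records. The counts below are invented to mirror that pattern, not real ManyCities data:

```python
from collections import defaultdict

# Toy version of the ManyCities comparison: find each city's peak
# texting hour from (city, hour, message_count) records. Counts are
# fabricated to echo the pattern described, not project data.

records = [
    ("Hong Kong", 8, 950), ("Hong Kong", 9, 1200), ("Hong Kong", 20, 700),
    ("New York", 8, 400), ("New York", 19, 1100), ("New York", 20, 1300),
]

def peak_hour(records):
    """Return {city: hour with the highest total message count}."""
    totals = defaultdict(lambda: defaultdict(int))
    for city, hour, count in records:
        totals[city][hour] += count
    return {city: max(hours, key=hours.get) for city, hours in totals.items()}

print(peak_hour(records))  # {'Hong Kong': 9, 'New York': 20}
```

Most of the site's "intuitive interpretations" are variations on this kind of aggregation, sliced by geography, time, or neighborhood.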
Cynthia Murrell, October 8, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
Restlet Promotes Paul Doscher to the Cloud
October 8, 2015
What has Paul Doscher been up to? We used to follow him when he was a senior executive over at LucidWorks, but he has changed companies and is now riding on clouds. PRWeb published the press release “Restlet Appoints Paul Doscher As New CEO To Accelerate Deployment Of Most Comprehensive Cloud-Based API Platform.” Doscher is the brand new president, CEO, and board member at Restlet, leading creators of deployed APIs framework. Along with LucidWorks, Doscher held executive roles at VMware, Oracle, Exalead, and BusinessObjects.
Restlet got its start as an open source project by Jerome Louvel. Doscher replaces Louvel as CEO, and Louvel is quite pleased about handing over the reins to his successor:
“ ‘I’m extremely pleased that we have someone with Paul’s experience to grow Restlet’s leadership position in API platforms,’ said Louvel. ‘Restlet has the most complete API cloud platform in the industry and our ease of use makes it the best choice for businesses of any size to publish and consume data and services as APIs. Paul will help Restlet to scale so we can help more businesses use APIs to handle the exploding number of devices, applications and use cases that need to be supported in today’s digital economy.’ ”
Doscher wants to break down the barriers to cloud adoption and take it to the next level. His first task as the new CEO will be integrating DHC, the API testing tools vendor, and using it to enhance Restlet’s API Platform.
Restlet is ecstatic to have Doscher on board and Louvel is probably heading off to a happy retirement.
Whitney Grace, October 8, 2015
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
IBM Defines Information Access the Madison Avenue Way
October 7, 2015
Yesterday (October 6, 2015) I wrote a little dialogue about the positioning of IBM as the cognitive computing company. I had a lively discussion at lunch after the story appeared about my suggestion that IBM was making a grandstand play influenced by Madison Avenue thinking, not the nuts and bolts realities of making sales and generating revenue.
Well, let’s let IBM rejiggle the line items in its financial statements. That should allow the critics of the company to see how Watson (which is the new IBM) accounts for IBM revenues. I am okay with that, but for me, the important numbers are the top line revenue and profit. Hey, call me old fashioned.
In the midst of the Gartner talk about IBM, the CNBC exclusive with IBM’s Big Blue dog (maybe just like the Gartner talk and thus not really “exclusive”?), and the wall paper scale ads in the New York Times and Wall Street Journal, there was something important. I don’t think IBM recognizes what it has done for the drifting, financially challenged, and incredibly fragmented search and content processing market. Even the LinkedIn enterprise search discussion group which bristles when I quote Latin phrases to the members of the group will be revivified.
Indexing and grouping are useful functions. When applied with judgment, even an earthworm of unrelated words and phrases may communicate more effectively.
To wit, this is IBM’s definition of Watson, which is search based on Lucene, home brew code, and software from IBM acquisitions:
Author extraction—Lots of “extraction” functions
Concept expansion
Concept insights—I am not sure I understand the concept functions
Concept tagging—Another concept function
Dialog—Part of NLP maybe
Entity extraction—Extraction
Face detection with the charming acronym F****d—Were the Mad Ave folks having a bit of fun?
Feed detection—Aha, image related
Image Link extraction—Aha, keeping track of urls
Image tagging—Aha, image indexing. I wonder if this is recognition or using information in the file or a caption
Keyword extraction
Language detection
Language translation
Message resonance—No clue here in Harrod’s Creek
Natural language classifier—NLP again
Personality insights—Maybe figuring out what the personality of the author of a processed file means?
Question and answer (I think this is natural language processing which incorporates many other functions in this list)—More NLP
Relationship extraction—IBM has technology from its purchase of i2 which performs this function. How does this work on disparate streams of unstructured content? I have some thoughts
Review and rank—Does this mean relevance ranking?
Sentiment analysis—Yes, is a document with the word F****d in it positive or negative
Speech to text—Seems similar to text to speech
Taxonomy—Ah, ha. A system to generate a list of controlled terms. No humans needed? Nah, humans can be billable and it is an IBM function
Text extraction—Another extraction function
Text to speech
Tone analyzer—So what is the tone of a document containing the string F****d?
Tradeoff analytics—Hmm. Now Watson is doing a type of analytics presumably performed on text? What are the thresholds in the numerical recipe? Do the outputs make sense to a normal human?
Visual recognition—Baffler
Watson news—Is this news about Watson or news presented in Watson via a feed-type mechanism. Phrase does not even sound cool to me.
Now that’s a heck of a list. Notice that the word “search” does not appear in the list. I did not spot the word “semantics” either. Perhaps I was asleep at the switch.
When I was in freshman biology class in 1962, Dr. Daphne Swartz, a very traditional cut ‘em up and study ‘em scientist, lectured for 90 minutes about classification. I remember learning about Aristotle and his division of organisms into two groups: plants and animals. I know this is rocket science, but bear with me. There was the charmingly named Carolus Linnaeus, a fan of herring I believe, who cooked up the kingdom, genus, species thing. Then there was, much later, the wild and crazy library crowd which spawned Dewey or, as I named him, Mr. Decimal.
Why is this germane?
It seems to me that IBM’s list of Watson functions needs a bit of organization. In fact, some of the items appear to belong under other items; for example: language detection and language translation. More egregious is the broad concept of natural language processing. One could, if one were motivated, argue that entity extraction, text extraction, and keyword extraction might look similar to a non-Watsonian intellect. Dr. Swartz would probably have some constructive criticism to offer.
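The organization the list lacks could be as simple as a two-level mapping. Here is one possible grouping over a sample of the functions; the taxonomy is my own, not IBM's:

```python
# One way to impose the missing structure: a two-level taxonomy over a
# sample of the Watson function list. The grouping is mine, not IBM's.

taxonomy = {
    "language": ["language detection", "language translation",
                 "natural language classifier", "dialog"],
    "extraction": ["author extraction", "entity extraction",
                   "keyword extraction", "text extraction"],
    "image": ["face detection", "image tagging", "visual recognition"],
    "speech": ["speech to text", "text to speech"],
}

def parent_of(function, taxonomy):
    """Look up which top-level category a function belongs to."""
    for category, members in taxonomy.items():
        if function in members:
            return category
    return None

# Language detection and translation now sit under one node, as they should:
print(parent_of("language detection", taxonomy))    # language
print(parent_of("language translation", taxonomy))  # language
```

Linnaeus would recognize the move: a flat earthworm of terms becomes a hierarchy, and the redundancies jump out immediately.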
What’s the purpose of this earthworm list?
Beats me. Makes IBM Watson seem more than Lucene with add ons?
Stephen E Arnold, October 7, 2015
Technology and Dark Matter: Confusion an Undesirable Force for Some
October 7, 2015
By chance, my Overflight system spit out two articles which I read one after the other.
The first was “Technological Dark Matter.” The second was “The Tyranny of Choice: Why Enterprise Tech Buyers Are Confused.” Information access mavens seem to be drifting into a philosophical mode. Deeper thinking is probably needed. Superficial thinking is not doing a very good job of dealing with issues such as the difficulty of looking for an image in the British Library collection, the dazzling irrelevance of Web search results, and trivial matters such as the online security glitches experienced by outfits which like to think they are the best and brightest around.
The Dark Matter write up confuses me. The notion of Dark Matter is that “something” is there, but it cannot be located. I don’t want to call it a physicist cheat, but darn it, if one can’t find it, maybe the notion is flawed in some way.
The write up informed me that I come into contact with “internal tools.” Well, no. I think internal tools like the other points in the write up are business processes manifested in interactions with other systems and people. These processes, if not worked out correctly, add friction to a system. Who wants to change a mainframe based system into a cloud service for free or for fun? I don’t, and I don’t know too many people who would or could. Pain, gentle reader, pain is migrating an undocumented mainframe system to a cloud hybrid confection. Nope.
The write up’s points are monetization, security, localization (which I don’t understand), long tail features (but I don’t understand the word “bajillion” either), and micro optimizations (again, baffled).
Nevertheless, the write up sparked my thoughts about the invisible, yet cost adding, functions that are not on the users’, customers’, competitors’, or consultants’ radar. Big outfits have big friction. Inefficiency is the name of the game. Now that’s Dark Matter I find interesting.
The second article struck a chord because it focuses on the relationship between complexity and confusion. The write up is more coherent than the first article. I highlighted this passage:
Brazier [a wizard from Canalys] said “rising levels of complexity” were making it “harder for customers to keep up with everything.” This in turn made it harder for customers to make decisions, he concluded. “Prices are going up. That has clearly restricted demand.”
The complexity thing linked with confusion and prices.
The magic of juxtaposition. Technology outfits, particularly those engaged in information access, have a tough time explaining what their products do, what the products’ value is, and why the information access systems anger half or more of their users.
Consultants explain the problem in terms of governance, a term a bit like bajillions. Sounds good, means nothing. Consultants (often failed webmasters trying to get “real” work or art history majors with a knack for Photoshop) guide the helpless procurement team to a decision.
Based on my brushes with these groups, the choices are narrowed to established companies pitching software which may not work. Often a deal will be made because someone knows someone. A personal endorsement is better than an Instagram factoid.
I have three notions floating around in my mental mine drainage pond:
- Technology centric companies are faced with rising technology costs and may have fewer and fewer ways to generate more cash. Not good for investors, employees, and customers. Googlers call this the rising cost of technology’s credit card debt.
- The problems which seem to crop up with outfits like Amazon, Facebook, and Google really gum up the lives of users, partners, and others involved with the company. Whether know-how based, like Google’s Belgium glitch, or legal, like the European Commission’s pursuit of monopolists, these problems will drive costs up.
- The notion of guidance is becoming buck passing and derrière shielding. Those long, inefficient, circular procurement processes defer a decision and accountability.
Net net: Process friction, confusion, complexity, and cost increases. The new hot buttons for information access and other technology centric companies.
Stephen E Arnold, October 7, 2015