Love Lost between Stochastic and Google AppEngine

March 30, 2012

Stochastic Technologies’ Stavros Korokithakis has some very harsh words for Google’s AppEngine in “Going from Loving AppEngine to Hating it in 9 Days.” Is the Google shifting its enterprise focus?

Stochastic’s service Dead Man’s Switch got a huge publicity boost from a recent Yahoo article, which drove thousands of new visitors to the site. Anticipating just such a surge, the company had turned to Google’s AppEngine months earlier to manage potential customers. At first, AppEngine worked just fine. The hassle-free deployments during the rewrite and the free tier were just what the company needed at that stage.

Soon after the Yahoo piece, Stochastic knew they had to move from the free quota to a billable status. There was a huge penalty, though, for one small mistake: Korokithakis entered the wrong credit card number. No problem, just disable the billing and re-enable it with the correct information, right? Wrong. Billing could not be re-enabled for another week.

Things only got worse from there. Korokithakis attempted to change settings from Google Wallet, but all he could do was cancel the payment. He then found that, while he was trying to correct his credit card information, the AppEngine Mail API had reached its daily 100-recipient email limit. The limit would not be removed until the first charge cleared, which would take a week. The write up laments:

At this point, we had five thousand users waiting for their activation emails, and a lot of them were emailing us, asking what’s wrong and how they could log in. You can imagine our frustration when we couldn’t really help them, because there was no way to send email from the app! After trying for several days to contact Google, the AppEngine team, and the AppEngine support desk, we were at our wits’ end. Of all the tens of thousands of visitors that had come in with the Yahoo! article, only 100 managed to actually register and try out the site. The rest of the visitors were locked out, and there was nothing we could do.

Between sluggish payment processing and a bug in the Mail API, it took nine days before the Stochastic team could send emails and register users. The company undoubtedly lost many potential customers to the delay. In the meantime, to add charges to injury, the AppEngine task queue kept retrying the failed email sends and ran up high instance fees.
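For what it is worth, the legacy AppEngine task queue does let developers cap runaway retries. Below is a minimal sketch, assuming the old Python 2.7 runtime and its google.appengine.api.taskqueue module; the handler URL and parameter names are our hypothetical illustrations, not Stochastic’s code.

```python
# Minimal sketch, legacy AppEngine Python 2.7 runtime assumed.
# The handler URL and parameters are hypothetical illustrations.
from google.appengine.api import taskqueue

def enqueue_activation_email(recipient):
    """Queue an activation email with a bounded retry policy so a failing
    Mail API call cannot retry indefinitely and run up instance fees."""
    taskqueue.add(
        url='/tasks/send_activation',  # handler that calls the Mail API
        params={'to': recipient},
        retry_options=taskqueue.TaskRetryOptions(
            task_retry_limit=5,        # stop after five attempts
            min_backoff_seconds=300,   # wait at least five minutes between tries
        ),
    )
```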

It is no wonder that Stochastic is advising us all to stay away from Google’s AppEngine. Our experiences with Google have been positive. Perhaps this is an outlier’s experience?

Cynthia Murrell, March 30, 2012

Sponsored by Pandia.com

Fat Apps. What Happened to the Cloud?

February 5, 2012

If it seems like a step backward, that’s because it is: Network Computing declares, “Fat Apps Are Where It’s At.” At least for now.

Writer Mike Fratto makes the case that, in the shift from desktop to mobile, we’re getting ahead of ourselves. Cloud-based applications that run only the user interface on mobile devices are a great way to save space, if you can guarantee constant wireless access to the Web. That’s not happening yet. Wi-Fi is unreliable, and wireless data plans with their data caps can become very expensive very quickly.

Besides, says Fratto, services that aim to place the familiar desktop environment onto mobile devices, like Citrix XenApp or VMware ThinApp, are barking up the wrong tree. The article asserts:

There isn’t the screen real estate available on mobile devices–certainly not on phones–to populate menus and pull downs. . . . But that is how desktop apps are designed. Lots of features displayed for quick access because you have the room to do it while still providing enough screen space to write a document or work on a spreadsheet. Try using Excel as a thin app on your phone or tablet. See how long it takes for you to get frustrated.

So, Fratto proposes “fat apps” as the temporary alternative, applications designed for mobile use with local storage that let you continue to work without a connection. Bloatware is back, at least until we get affordable, universal wireless access worked out.
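To make the “fat app” idea concrete, here is a toy sketch of the offline-first pattern Fratto describes: save work locally, sync when a connection reappears. This is our illustration, not code from the article; the file name and upload callback are hypothetical.

```python
# Toy sketch of the offline-first "fat app" pattern: work is persisted
# locally first, then pushed to the cloud when connectivity returns.
import json
import os

PENDING = 'pending_edits.json'  # hypothetical local store

def save_edit(edit):
    """Persist an edit locally so the user can keep working offline."""
    edits = []
    if os.path.exists(PENDING):
        with open(PENDING) as f:
            edits = json.load(f)
    edits.append(edit)
    with open(PENDING, 'w') as f:
        json.dump(edits, f)

def sync(upload):
    """Push locally stored edits once a connection is available."""
    if not os.path.exists(PENDING):
        return
    with open(PENDING) as f:
        for edit in json.load(f):
            upload(edit)  # network call supplied by the caller
    os.remove(PENDING)
```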

I am getting some app fatigue. What’s the next big thing?

Cynthia Murrell, February 5, 2012

Sponsored by Pandia.com

With Relevance Trashed, Is It Gray Unitards for Online Users?

February 22, 2018

People judge the size of the Internet based on their limited experience and on the reports search engines such as Google and Yahoo generate. Search engines compete for users by advertising the size of their search indexes, but the Internet is bigger than any individual index. The Search Engine Roundtable discusses how search indices do not encompass the entire Internet in the article, “Google: You Can’t Judge Index Size By One Or Two Sites.”

One example showing you cannot determine the Internet’s size from one or two search engines is to compare the results different engines return for the same keywords. Both DuckDuckGo and Bing have proven more than once that they can discover Web sites Google and Yahoo cannot. Tim Bray wrote about one such case on his blog, and the post soon made its way to Google:

“It caught Danny Sullivan’s attention on Twitter, which Danny responded that ‘I wouldn’t make that assumption for the entire web based on what’s happening with only your site.’ Tim being respected, Danny said he would dig into it, believing it may be an issue with the two example sites he cited, even though Bing and DuckDuckGo was able to index and return the content in their search engines.”

Google and Yahoo now index the page in question, but this is a reminder that the Internet is a big place. The Dark Web is not picked up by regular search engines, and given the number of Web pages generated every day, it is no surprise that Google and Yahoo would miss one. Maybe there is a business opportunity to develop an AI that tracks Web sites Google and other search engines have not found yet.

Whitney Grace, February 22, 2018

Google Does a Tim Andrews with Local News

February 15, 2018

Google is one of the country’s leading news providers because it pulls its stories from many different news sources. While Google provides good coverage of international and national affairs, local news is still better curated by the hometown paper. Google is changing its approach to local news, says Business Insider: “Google Is Building A Local News Service That Anyone Can Contribute To.”

Google’s new endeavor is called Bulletin, and it allows citizen journalists to write, blog, vlog, and share their images straight from a mobile device without an official news outlet. Google wants to ramp up coverage of local stories within communities that traditional news outlets would miss.

Google wants to boost its own news service as a viable outlet and connect people with more local information, but the general concern is accuracy and quality. Google has already been cited for promoting fake news during the 2016 presidential election, and providing an outlet for the average joe without editorial vetting is a big issue.

The thing about the Internet is that people who are normally not given a voice now have a medium through which to be heard. This has many extraordinary benefits, but it carries just as many problems. Fake news does need to be stopped, but would Google Bulletin only be adding fuel to the fire?

The only thing to do is wait and see what happens:

“It’ll be interesting to see how this rolls out and fits into Google’s strategy for grabbing more eyeballs through its News and Search services. Beyond getting people to try Bulletin when they’re starting out reporting local news, it’ll have to incentivize them for sticking around once they get the hang of it and feel the need to grow an audience for themselves.”

Is Google embracing the spirit of the “old” America Online?

Whitney Grace, February 15, 2018

Universal Text Translation Is the Next Milestone for AI

February 9, 2018

As the globe gets smaller, individuals are in more contact with people who do not speak their language, or are reading information written in a foreign language. Programs like Google Translate are flawed at best, and it is clear this is a niche waiting to be filled. With the rise of AI, it looks like that is about to happen, according to a recent GCN article, “IARPA Contracts for Universal Text Translator.”

According to the article:

The Intelligence Advanced Research Projects Activity is a step closer to developing a universal text translator that will eventually allow English speakers to search through multilanguage data sources — such as social media, newswires and press reports — and retrieve results in English.

The intelligence community’s research arm awarded research and performance monitoring contracts for its Machine Translation for English Retrieval of Information in Any Language program to teams headed by leading research universities paired with federal technology contractors.

Intelligence agencies, said IARPA project managers in a statement in late December, grapple with an increasingly multilingual, worldwide data pool to do their analytic work. Most of those languages, they said, have few or no automated tools for cross-language data mining.
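To make the retrieval idea concrete, here is a toy sketch of translate-then-search, our illustration of the concept rather than anything IARPA has published; the tiny_mt table is a hypothetical stand-in for a real machine translation model.

```python
# Toy sketch of cross-language retrieval: translate foreign documents
# into English, then match an English query against them. tiny_mt is a
# hypothetical stand-in for a real machine translation model.
tiny_mt = {'hola': 'hello', 'mundo': 'world'}  # toy Spanish-to-English table

def translate(text):
    return ' '.join(tiny_mt.get(word, word) for word in text.lower().split())

def cross_language_search(english_query, foreign_docs):
    """Return English renderings of documents that match the query."""
    hits = []
    for doc in foreign_docs:
        english = translate(doc)
        if all(term in english.split() for term in english_query.lower().split()):
            hits.append(english)
    return hits

print(cross_language_search('hello world', ['Hola mundo', 'Hola amigo']))
# -> ['hello world']
```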

This sounds like a very promising opportunity to get everyone speaking the same language. However, we think there is still a lot of room for error. We are betting on Unbabel’s AI translation software, which is backed up by human editors. (The company raised $23 million, so it must be doing something right.) That human angle seems to be the hinge on which success will turn in this rich field.

Patrick Roland, February 9, 2018

Financial Research: Rumblings Get Louder

February 8, 2018

Regulations are causing small tremors in the high altitude research business. I read “U.S. Asset Managers Shake Up Equity Research as Banks Cut Back.” The write up offered several pieces of intelligence which might be considered “real” news.

First, outfits with money to invest and “churn” are hiring people who know specific things; for example, a former product manager at a company manufacturing gear related to artificial intelligence. “No MBA needed” was the takeaway for me.

Second, big money outfits have cut back on buying research. According to the article, one big money executive stopped buying bank research and learned “that he could live without most of it.”

Third, I highlighted this headache inducing statement for the providers of high end research:

Major global investment banks slashed their equity research budgets from a peak of $8.2 billion in 2008 to $3.4 billion in 2017, according to Frost Consulting. McKinsey projects the top 10 banks will cut those budgets by another 30 percent in the near term…
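For perspective, another 30 percent off $3.4 billion would leave roughly $2.4 billion, less than a third of the 2008 peak.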

My question, “What happens to the Investext business?” Another one: “What acquisitions will big money companies make in order to deal with the changes in research?”

Worth watching.

Stephen E Arnold, February 8, 2018

An Upside to Fake Data

February 2, 2018

We never know if “data” are made up or actual factual. Nevertheless, we read “How Fake Data Can Help the Pentagon Track Rogue Weapons.” The main idea, from our point of view, is predictive analytics which can adapt to that which has not yet happened. We circled this statement from IvySys, the company under contract with the US government to make “fake” data useful:

IvySys Founder and Chief Executive Officer James DeBardelaben compared the process to repeatedly finding a needle in a haystack, but making both the needle and haystack look different every time. Using real-world data, agencies can only train algorithms to spot threats that already exist, he said, but constantly evolving synthetic datasets can train tools to spot patterns that have yet to occur.
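A toy sketch of that needle-and-haystack idea appears below; it is our illustration of synthetic training data in general, not IvySys’s method. Every call builds a fresh background “haystack” and hides a differently shaped “needle” in it, so a detector must learn the concept of an anomaly rather than memorize one known signature.

```python
# Toy sketch of evolving synthetic data: each call yields a new haystack
# (background signal) with a differently sized and placed needle (anomaly).
import random

def make_haystack(n=10000):
    """Background readings drawn from a baseline that shifts per dataset."""
    base = random.uniform(50, 150)
    return [random.gauss(base, base * 0.1) for _ in range(n)]

def plant_needle(haystack):
    """Insert an anomalous burst whose size, strength, and position vary."""
    width = random.randint(5, 50)
    start = random.randrange(len(haystack) - width)
    spike = random.uniform(3, 8)  # how far the needle deviates from baseline
    for i in range(start, start + width):
        haystack[i] *= spike
    return haystack, start

# A detector trained on many such datasets never sees the same threat twice.
data, needle_at = plant_needle(make_haystack())
```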

Worth monitoring IvySys at https://www.ivysys.com/.

Stephen E Arnold, February 2, 2018

Google News Says Goodbye to Russian Propaganda

January 29, 2018

The United States is still reeling from possible Russian interference in the 2016 presidential election. Every other day brings some headline associated with the Trump Administration’s ties to the great bear, but those ties still remain unclear. However, one cold, hard fact is that Russia did influence online news outlets, and media companies are taking steps to guarantee it does not happen again. Motherboard reports that “Eric Schmidt Says Google News Will ‘Engineer’ Russian Propaganda Out Of News Feed.”

Alphabet Executive Chairman Eric Schmidt has faced criticism that Google News still displays Russian Web sites in news feeds. Schmidt responded that his company is well aware of the problem and has a plan to ferret out Russian propaganda. The top two Russian news outlets featured in Google News are Sputnik and RT. Both are owned by the Russian government and have ceaselessly argued their legitimacy. That “legitimacy” allows them to benefit from Google AdSense.

Despite the false legitimacy, Schmidt said Alphabet is aware of Russia’s plans to influence western politics:

Schmidt said the Russian strategy is fairly transparent, and usually involves ‘amplification around a message.’ That information can be “repetitive, exploitative, false, [or] likely to have been weaponized,’ he said.  ‘My own view is that these patterns can be detected, and that they can be taken down or deprioritized.’
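Schmidt’s “amplification around a message” is at least a detectable signal. Here is a toy sketch of one naive version, our illustration only and certainly not Google’s method: messages repeated nearly verbatim across many sources sink in the ranking.

```python
# Toy sketch of deprioritizing "amplification around a message": texts
# pushed verbatim by many sources are moved to the bottom of the feed.
from collections import Counter

def deprioritize_amplified(items, threshold=3):
    """items: list of (source, text) pairs from a news feed."""
    counts = Counter(text for _, text in items)
    organic = [pair for pair in items if counts[pair[1]] < threshold]
    amplified = [pair for pair in items if counts[pair[1]] >= threshold]
    return organic + amplified  # amplified content ranks last

feed = [('a', 'x happened'), ('b', 'y said z'), ('c', 'x happened'),
        ('d', 'x happened'), ('e', 'local story')]
print(deprioritize_amplified(feed))  # the three 'x happened' items sink
```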

The problem is that Alphabet has not really outlined its plans to deter Russian influence. Russian propaganda in the news bears some similarities to the Watergate scandal during the Nixon Administration. We have yet to see the long-term aftermath, but it piques our curiosity about how it will affect the United States in years to come.

Whitney Grace, January 29, 2018

Facebook and Google: An Easy Shift from Regulated to Menace

January 26, 2018

I read “George Soros: Facebook and Google a Menace to Society.” I thought the prevailing sentiment was regulation. Many industries are regulated, and some which should be, like consulting, are not.

The British newspaper which is popular in Harrod’s Creek for its digital commitment and its new chopped down form factor offered this nugget from George Soros, an interesting billionaire:

Facebook and Google have become “obstacles to innovation” and are a “menace” to society whose “days are numbered”, said billionaire investor and philanthropist George Soros at the World Economic Forum in Davos on Thursday. “Mining and oil companies exploit the physical environment; social media companies exploit the social environment,” said the Hungarian-American businessman, according to a transcript of his speech.

Let’s assume that Mr. Soros’ viewpoint grabs the hearts and minds of his fellow travelers. Will Facebook and Google face actions which are more than mere regulatory harnesses?

Not even good old Microsoft warranted the “menace” label. I think of menace as a word suggesting physical harm. Other definitions range from “a declaration of an intention to cause evil to happen” to scare, startle, or terrify.

Now Facebook and Google can be characterized in many ways. When we disseminate links to Facebook’s intellectual underbelly, none of the goslings is particularly frightened. When one of the DarkCyber researchers or I run a query on the GOOG, our blood does not run cold. We sigh and run the same query on different systems, even www.searx.me, which is often quite useful.

In my opinion, the PR stakes are rising for these superstars of the Silicon Valley way.

This will be interesting. Perhaps Philz Coffee-fueled protests will become more common in Plastic Fantasticland. Could some wealthy Davos types fund such gatherings? The T shirts could become collectibles too.

Stephen E Arnold, January 26, 2018

IBM and Algorithmic Bias

January 25, 2018

I read “Unexplainable Algos? Get Off the Market, Says IBM Chief Ginni Rometty.” The idea is in line with Weapons of Math Destruction and the apparent interest in “black box” solutions. If you are old enough, you will remember the Autonomy IDOL system. It featured a “black box” which licensees used without the ability to alter how the system operated. You may also recall that the first Google Search Appliances locked users out as well. One installed the GSA and it just worked—at least, in theory.

This article includes information derived from the IBM content output for the World Economic Forum, where it helps to have one’s own helicopter for transportation.

I noted this statement:

“When it comes to the new capabilities of artificial intelligence, we must be transparent about when and how it is being applied and about who trained it, with what data, and how,” the IBM chairman, president and CEO wrote.

I don’t want to be too picky, but IBM owns the i2 Analyst’s Notebook system. If you are not familiar with this platform, it provides law enforcement and intelligence professionals with tools to organize, analyze, and marshal information for an investigation. As a former consultant to i2, I am not sure the plumbing developed by i2 is public. In fact, i2 and Palantir jousted in court when i2 sued Palantir for improper use of its intellectual property; that is a fancy way of saying, “Palantir engineers tried to figure out how i2 worked.” The case settled out of court, and many of the documents are sealed because no party to the case wanted certain information exposed to bright sunlight.

IBM operates a number of cybersecurity services. One of these has the ability to intercept a voice call and map that call to email and other types of communications. The last time I received some information about this service I had to sign a bundle of documents. The idea, of course, is that much of the technology was, from my point of view, a “black box.”

So what?

The statement by IBM’s CEO is important because it is, in my opinion, hand waving. IBM deals in systems which are not fully understood even by some of the IBM experts selling these solutions, and the engineers who may know more about the inner workings of secret or confidential systems and methods are not talking. An expert knows stuff others do not; therefore, why talk and devalue one’s expertise?

To sum up, talk about making math-centric systems and procedures transparent is just noise. The people who can explain how systems which emerged from Cambridge University, like Autonomy’s neurolinguistic system or i2’s Analyst’s Notebook, actually work are in short supply.

How can one who does not understand explain how a complex system works? Black boxes exist to keep those with thumbs for fingers from breaking what works.

Talk doesn’t do much to deal with the algorithmic basics:

  1. Some mathematical procedures in wide use are not easily explained or reverse engineered; hence, the i2 charge that Palantir tried a short cut through the woods to the cookie jar.
  2. Most next generation systems are built on a handful of algorithms. I have identified 10, which I explain in my lectures about the flaws embedded in “smart” systems. Each of the most widely used algorithms can be manipulated in a number of ways. Some require humans to fiddle; others fiddle when receiving inputs from other systems.
  3. Explainable systems are based on rules. By definition, one assumes the rules work as the authors intended. News flash: rule based systems can behave in unpredictable, often inexplicable ways. A fun example is for you, gentle reader, to try to get the default numbering system in Microsoft Word to perform consistently with regard to left justification of numbered lists.
  4. Chain a series of algorithms together in a work flow. Add real time data to update thresholds. Watch the outputs. Now explain what happened. Good luck with that. (A toy sketch of this point follows the list.)
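Here is that toy sketch of point four, our illustration only: two chained stages, each adapting its threshold to live inputs. After a few hundred events, explaining why one record was flagged requires replaying the entire history that shaped both thresholds.

```python
# Toy sketch of chained, self-updating algorithms: stage one keeps a
# running mean; stage two flags values relative to that moving state and
# tightens its own multiplier each time it fires.
import random

def run_pipeline(events):
    mean = 0.0
    multiplier = 1.5
    flagged = []
    for i, x in enumerate(events, 1):
        mean += (x - mean) / i              # stage 1: threshold drifts with data
        if x > mean * multiplier:           # stage 2: flag against moving state
            flagged.append((i, round(x, 1)))
            multiplier *= 1.01              # flags feed back into stage 2
    return flagged

random.seed(7)
events = [random.gauss(100, 25) for _ in range(500)]
print(run_pipeline(events)[:5])  # which records flag depends on arrival order
```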

I love IBM. Always marketing.

Stephen E Arnold, January 25, 2018
