Google: Big Is Good. Huge Is Better.

January 15, 2021

I spotted an interesting factoid. The title of the article gives away the “reveal,” as thumbtypers are prone to say. “Google Trained a Trillion-Parameter AI Language Model” does not reference the controversial “draft research paper” by a former Google smart software person named Timnit Gebru. The point at issue is that smart software is trained on available content. Bingo, the smart software reflects the biases in that source content.

Pumping up numbers is interesting and raises the question, “Why is Google shifting into used car salesperson mode?” The company has never been adept at communicating or marketing in a clear, coherent manner. How many blog posts about Google’s overlapping services have I seen in the last 20 years? The answer is, “A heck of a lot.”

I circled this passage in the write up:

Google researchers developed and benchmarked techniques they claim enabled them to train a language model containing more than a trillion parameters. They say their 1.6-trillion-parameter model, which appears to be the largest of its size to date, achieved an up to 4 times speedup over the previously largest Google-developed language model (T5-XXL).

Got that?

Like “supremacy,” the “trillion-parameter AI language model” revolutionizes big.
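The write up skips how anyone trains 1.6 trillion parameters without a 1.6-trillion-multiplication bill per token. The technique reported for models of this class is sparse “mixture of experts” routing: a router sends each token to a single expert, so the parameter count balloons while per-token compute stays close to that of a dense layer. The sketch below is my illustration of top-1 routing, not Google’s code:

```python
# Hypothetical sketch of mixture-of-experts routing: the layer holds
# n_experts weight matrices, but each token touches only one of them,
# so parameters grow n_experts-fold while per-token compute does not.
# Illustration only; not Google's actual implementation.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts = 64, 8  # tiny numbers for the sketch

router = rng.normal(size=(d_model, n_experts))            # routing weights
experts = rng.normal(size=(n_experts, d_model, d_model))  # per-expert weights

def moe_layer(tokens: np.ndarray) -> np.ndarray:
    """Send each token to its top-1 expert and apply only that expert."""
    choice = (tokens @ router).argmax(axis=-1)  # winning expert per token
    out = np.empty_like(tokens)
    for i, tok in enumerate(tokens):
        out[i] = tok @ experts[choice[i]]       # 1 of 8 experts does the work
    return out

batch = rng.normal(size=(4, d_model))
print(moe_layer(batch).shape)  # (4, 64): 8x the parameters, ~1x the compute
```

In other words, “trillion parameters” is a storage claim more than a compute claim. Big is good. Huge is better. Sparse is cheaper.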

Google? What’s with the marketing push for the really expensive and money-losing DeepMind thing? Big numbers there too.

Stephen E Arnold, January 15, 2021

Traffic: Can a Supercomputer Make It Like Driving in 1930?

January 12, 2021

Advertisers work long and hard to find roads which are scenic and can be “managed” with the assistance of some government authorities to be perfect. The idea is that a zippy new vehicle zooms along a stretch of tidy highway (no litter or obscene slogans spray painted on billboards, please). Behind the wheel or in the semi-autonomous driver’s seat is a happy person. Zoom, zoom, zoom. (I once knew a poet named Alex Kuo. He wrote poems about driving. I found this interesting, but I hate driving, flying, or moving anywhere outside of my underground office in rural Kentucky.)

I also read a book called Traffic: Why We Drive the Way We Do (and What It Says about Us). I recall the information about Los Angeles’ super duper traffic management computer. If my memory is working this morning, the super duper traffic computer made traffic worse. An individual with some numerical capability can figure out why. Let those chimpanzees throw darts at a list of publicly traded securities and match the furry entity against the sleek MBA. Who wins? Yeah.

I thought about the hapless people who have to deal with driving, riding trains, or whatever during the Time of Rona. Better than pre-Rona, but not by much. Humans travel according to habit, the age-old work-when-the-sun-shines adage, or because clumping is baked into our DNA.

The problem is going to be solved, at least that’s the impression I obtained from “Could a Supercomputer Help Fix L.A.’s Traffic Problems?” Now traffic in Chicago sucks, but the wizards at the Argonne National Laboratory are going to remediate LaLa Land. I learned:

The Department of Energy’s Argonne National Laboratory is leading a project to examine traffic data sets from across the Los Angeles region to develop new strategies to reduce traffic congestion.

And what will make the difference this time? A supercomputer. How is that supercomputer doing with the Covid problem? Yeah, right.

The write up adds:

Super computers at the Argonne Laboratory are able to take a year’s worth of traffic data gathered from some 11,160 sensors across southern California, as well as movement data from mobile devices, to build forecasting models. They can then be applied to simulation projects.
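What “build forecasting models” means at toy scale: predict the next hour’s vehicle count at a sensor from the previous few hours. The hourly-count data layout below is my assumption for illustration; Argonne’s actual pipeline chews a year of data from 11,160 sensors on rather larger hardware:

```python
# Minimal sketch of a traffic forecasting model: predict next-hour
# vehicle counts from the previous few hours at one sensor. The data
# layout (hourly counts) is an assumption, not Argonne's pipeline.
import numpy as np

rng = np.random.default_rng(1)
hours = 24 * 7  # one week of hourly counts for a single fake sensor
t = np.arange(hours)
counts = 200 + 150 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 20, hours)

LAGS = 3  # predict hour t from hours t-1, t-2, t-3

# Design matrix: each row holds three consecutive hours; target is the next hour
X = np.column_stack([counts[i : hours - LAGS + i] for i in range(LAGS)])
y = counts[LAGS:]

# Ordinary least squares with an intercept column
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Forecast the next unseen hour from the three most recent observations
recent = np.append(counts[-LAGS:], 1.0)
print(f"Next-hour forecast: {recent @ coef:.0f} vehicles")
```

Scaling that from one fake sensor to 11,160 real ones is what the supercomputer is for.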

Who in LA has the ball?

Not the LA Department of Transportation. Any other ideas?

And how was driving in LA in 1930? Pretty awful according to comments made by my mother.

Stephen E Arnold, January 12, 2021

Google: A Rose by Any Other Name Could Be Fully Autonomous

January 7, 2021

I spotted an interesting article called “Waymo Shelves Self-Driving Term for Its Technology to Shore Up Safety.” The write up explains:

Waymo will call its technology “fully autonomous” to create, what it believes, is an important distinction. The company’s argument rests entirely on how the public perceives “self-driving” as a term.

Google’s attempt to solve the “problem” or respond to the “opportunity” of vehicles in which humans can play with their mobile devices instead of driving is bigly. I want to point out that Google and the others pitching this nirvana for motorists and advertisers have not solved some of the tricky issues. Crashing a car due to road markers? Mistaking a small dog for a puddle? These are not “problems”; they are outliers. Black sheep get hit by smart software too.

The fix is not the lingo. The fix is to begin to change the roadways to make the “opportunity” less fraught. How’s Google doing solving death and handling its labor “opportunity”? Yeah. Quantum supremacy too.

Stephen E Arnold, January 7, 2021

The Mainframe Wants to Be Young Again

January 7, 2021

I know that the idea of a mainframe being young is not widespread. I am not certain that a TikTok video has been created to demonstrate the spring in the septuagenarian’s step. I learned about the mainframe’s most recent aspirations in “Big Mainframe Computing.” I noted this statement:

BMC is now aiming to help build what it (and everybody else in the industry) is calling the autonomous digital enterprise by putting the artificial intelligence (AI) in mAInframe. The company now refers to the joint BMC Automated Mainframe Intelligence (AMI) and Compuware portfolios… and this is the world of Ops plus AI = AIOps.

I quite like the realization that the letters “ai” appear in the word “mainframe.” From my perspective, innovations are chugging along. Companies like Apple, AMD, and nVidia are crafting solutions likely to create additional computing alternatives for “smart software.”

Would you pay to attend a ballet performed by the people in pharmaceutical advertisements on cable nightly news programs?

I know I would not because I am 77 and know that what was possible in the 1960s is a bit of a challenge.

Stephen E Arnold, January 7, 2021

Facebook Focuses on an AI-Driven Future

January 7, 2021

Thousands of Facebook employees were treated to project announcements and product demos at the company’s end-of-the-year meeting. BuzzFeed News got its hands on an audio recording of the proceedings and shares a few highlights in “Facebook Is Developing a Tool to Summarize Articles so You Don’t Have to Read Them.” Like many of us, Facebook has had a challenging year. However, company executives painted a positive picture at the meeting. We are not surprised to see the company pinning many hopes on AI. Writer Ryan Mac reports:

“Despite the turmoil, the company’s leaders said the social networking company has moved forward, adding some 20,000 new workers this year. With more people around the world at home, the company has experienced record usage, said Facebook Chief Technology Officer Mike Schroepfer. … Among the advancements touted by Schroepfer were the company’s commitments to artificial intelligence, which has often been seen internally as a panacea to the social network’s ills. He noted that Facebook’s data centers were receiving ‘new systems’ that would make them 10 to 30 times faster and allow Facebook’s artificial intelligence (AI) to essentially train itself. ‘And it is actually the key tool we are using right now today in production to fight hate speech, misinformation, and honestly the hardest possible content problems we face,’ Schroepfer said, noting a company talking point that Facebook now detects 95% of all hate speech on the platform. In recent weeks, departing Facebook employees have pushed back on the idea that AI could cure the company’s content moderation problems. While Facebook employs thousands of third-party human moderators, it’s made it clear that AI is how it plans to patrol its platform in the future, an idea that concerns workers.”

The employees are right to be concerned. Experience shows AI is still a long way from consistently discerning nuance in human language. Facebook will save money by deploying algorithms rather than hiring people to moderate the platform, but that will do little to stem the tide of false information or hateful speech. Another new tool that leverages AI looks like another way to spread false, or at least incomplete, information—“TL;DR” (the popular abbreviation for “Too Long, Didn’t Read”) will summarize news articles into bulleted lists. No word on whether this is connected to Semantic Scholar’s tool by the same name designed for use with academic texts.
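Facebook has not published how TL;DR works. A classic extractive baseline conveys the flavor of such a tool: score each sentence by the frequency of its words and keep the top few as bullets. A minimal sketch, emphatically not Facebook’s system:

```python
# Hypothetical sketch of extractive "TL;DR" summarization: score each
# sentence by how many frequent words it contains, keep the top three
# as bullets. A classic frequency baseline, not Facebook's method.
import re
from collections import Counter

def tldr(article: str, n_bullets: int = 3) -> list[str]:
    """Split into sentences, score by average word frequency, keep top bullets."""
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    freq = Counter(re.findall(r"[a-z']+", article.lower()))

    def score(sentence: str) -> float:
        toks = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_bullets]
    return [s for s in sentences if s in top]  # keep original article order

text = ("Facebook held its end-of-year meeting. Executives touted AI. "
        "AI will fight hate speech and misinformation. "
        "Employees doubt AI can moderate content alone. "
        "A new TL;DR tool will summarize news articles.")
for bullet in tldr(text):
    print("-", bullet)
```

Note that an extractive method only copies sentences; anything abstractive (rewriting) raises exactly the incomplete-information risk flagged above.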

Other curious announcements include the development of a universal translator (a la Star Trek) and “Horizon,” a VR social network where users’ avatars can hang out together (inspired by pandemic isolation, perhaps?). Then there is the brain-reading device. Last year Facebook bought neural-interface startup CTRL-labs and has since made progress on a device that can translate thoughts into physical actions. Potential applications, says Schroepfer, include typing, manipulating virtual objects, or driving a character in a video game. Will they put that together with the Horizon project? Hmmm.

Cynthia Murrell, January 7, 2021

Artistic License Gives Way to Artistic Bias

January 5, 2021

It is a fact that AI facial recognition technology is skewed toward favoring white males. The algorithms misidentify non-white people and females. Facial technology is biased because the developers are generally white males and use their own images as data to test the algorithms. A new trend in AI is making art from algorithms, but VentureBeat says there are unintended biases in art too: “Researchers Find Race, Gender, And Style Biases In Art-Generating AI Systems.”

Fujitsu researchers discovered that AI art algorithms carry socioeconomic implications and exhibit clear prejudices. The researchers covered a lot of ground for their study:

“In their work, the researchers surveyed academic papers, online platforms, and apps that generate art using AI, selecting examples that focused on simulating established art schools and styles. To investigate biases, they considered state-of-the-art AI systems trained on movements (e.g., Renaissance art, cubism, futurism, impressionism, expressionism, post-impressionism, and romanticism), genres (landscapes, portraits, battle paintings, sketches, and illustrations), materials (woodblock prints, engravings, paint), and artists (Clementine Hunter, Mary Cassatt, Vincent van Gogh, Gustave Doré, Gino Severini).”

The researchers discovered that when artworks were translated into different styles or otherwise altered, dark skin color was not preserved and long-haired men were mistaken for women. Also, artwork from one style era did not translate well to others. The problem stems from the same issues as facial recognition technology: a lack of diverse data and inconsistent labeling:

“The researchers peg the blame on imbalances in the datasets used to train generative AI models, which they note might be influenced by dataset curators’ preferences. One app referenced in the study, AI Portraits, was trained using 45,000 Renaissance portraits of mostly white people, for example. Another potential source of bias could be inconsistencies in the labeling process, or the process of annotating the datasets with labels from which the models learn, according to the researchers. Different annotators have different preferences, cultures, and beliefs that might be reflected in the labels that they create.”
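The imbalance the researchers describe is detectable before any model is trained with a plain label-count audit. The labels and counts below are invented for illustration (the 45,000 figure echoes the AI Portraits example):

```python
# Minimal sketch: audit a training set's label distribution before
# training a generative art model. Labels and counts are invented;
# the 45,000 echoes the AI Portraits example in the study.
from collections import Counter

labels = (["renaissance_portrait_light_skin"] * 45_000
          + ["renaissance_portrait_dark_skin"] * 800
          + ["woodblock_print"] * 3_000)

counts = Counter(labels)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label:33s} {n:6d}  ({100 * n / total:.1f}%)")
# A 92% / 1.6% skin-tone split is a red flag before any model is trained.
```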

The article ends with a warning that AI generated art could lead to “false perceptions about social, cultural, and political aspects of past times and hinder awareness about important historical events. For this reason, they urge AI researchers and practitioners to inspect the design choices and systems and the sociopolitical contexts that shape their use.”

There is no argument that diverse data is needed to perfect AI generated art; however, there is going to be a lack of it simply because it does not exist. These art movements, especially those from Europe, will not be ethnically or sexually diverse. The art movements will be more sexually diverse than ethnically diverse, because there were female artists and females were painted in a lot of pictures. Ethnically, however, Europe is where these art movements began and where the subjects were mostly white, so the data will be skewed in that direction.

Modern artists of all ethnicities and genders can imitate these art styles, but that does not make them authentic to the era. There are exceptions to this rule, of course, but they are limited. It is similar to asking for a Chinese eyewitness account of New World colonization or wanting to know how Beethoven was influenced by African music. They simply do not exist.

Instead of concentrating solely on European art movements, why not incorporate African, Asian, and aboriginal art from the same eras? It will provide diverse data from real era appropriate art with lots of different styles.

Whitney Grace, January 5, 2021

Amazon Top Dog Video: A Minor Omission

January 1, 2021

If you are an Amazon watcher, you will enjoy the production values and some of the examples in the video “How Amazon Became the Top Dog in Artificial Intelligence: Tech Video.” On the plus side, the producer has pumped some bucks into visuals intended to represent the way Amazon’s technology functions. Keep in mind that the program is a video, and not a white board in an Amazon tech center. There is one downside or what I call a “minor omission” in the program. The oversight is probably irrelevant because Amazon itself goes out of its way to choke off the flows of information about one of its more interesting businesses: policeware. Amazon is the plumbing for some of the most widely used policeware vendors who specialize in aggregating and analyzing information to solve crimes. Plus Amazon has a pride of lion-hearted entrepreneurs who are developing next generation policeware for their government customers. Also, Amazon has some “interesting” partners who match up products, services, features, and functions for government projects. Are you watching Dubai’s use of AWS? Ah, well, there’s 2021 to dive into that topic. The policeware angle is not to be found in the video. Oversight? Amazon’s influence? Cutting room floor?

Stephen E Arnold, January 1, 2021

Smart Software and Cyber Security

December 30, 2020

Smart software appears to be the solution to escalating cyber security woes. An unusual article (actually more of a collection of dot points) provides some insight into the challenges makers of smart security software have to overcome. Navigate to “What is the Impact of Artificial Intelligence on Cyber Security?” and scroll to the section titled “Why Did Artificial Intelligence Fail?” Here are three of the 10 reasons:

  • When you are stuck in a never-ending development loop
  • Most AI models decay over time
  • Optimizing for the wrong thing

Before I read the article, I had been operating on a simple principle: Smart cyber security software is an oxymoron. Yikes. I did not know I was stuck in a never-ending development loop or optimizing for the wrong thing.

The article offers a number of statements which, I assume, are intended to be factoids. In reality, the collection of information is a gathering of jargon and sales babble.

The write up reveals how to get rid of smart security software failures. There are seven items on this list. Here’s one: Statistical Methodology.

Several observations:

  • Smart software works when the knowns are trimmed to a manageable “space.”
  • The “space” is unfortunately dynamic, so the AI has to change with it. That usually requires human help and an often expensive retraining cycle.
  • The known “space” is what the best of the bad actors map in order to attack from outside it in new ways. (A toy illustration of this point appears after the list.)
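Here is that toy illustration: a detector trained on “normal” traffic flags only what deviates from the known distribution, so a crafted attack that stays inside the envelope sails through untouched. The features and the threshold are invented for the sketch:

```python
# Toy sketch of the "known space" problem. A detector trained on normal
# traffic alerts on events far from the training distribution, but a
# crafted attack that stays inside the known envelope is not flagged.
# Features and the threshold are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
train = rng.normal(loc=0.0, scale=1.0, size=(10_000, 4))  # the "known space"
mu, sigma = train.mean(axis=0), train.std(axis=0)

def is_anomalous(event: np.ndarray, threshold: float = 4.0) -> bool:
    """Alert if any feature sits more than `threshold` sigmas from normal."""
    return bool((np.abs((event - mu) / sigma) > threshold).any())

noisy_attack = np.array([0.1, 9.0, 0.2, -0.3])     # novel behavior: caught
stealthy_attack = np.array([0.5, -0.4, 1.1, 0.2])  # inside the envelope: missed
print(is_anomalous(noisy_attack))     # True
print(is_anomalous(stealthy_attack))  # False
```

The stealthy case is the one that matters, and no amount of retraining on yesterday’s data fixes it.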

Net net: The SolarWinds misstep illustrates that exactly zero of the classified systems used to monitor adversaries’ cyber attacks rang the klaxon. To make matters more embarrassing, exactly zero of the commercial threat intelligence and cyber monitoring systems punched a buzzer either.

Conclusion: Lists and marketing hoo hah are not delivering. The answer to the question “What is the impact of artificial intelligence on cyber security?” is perhaps an opportunity to over promise and under deliver.

Stephen E Arnold, December 30, 2020

Google and Its Smart Software

December 28, 2020

I spotted “What AlphaGo Can Teach Us About How People Learn.” The subtitle is Google-friendly:

David Silver of DeepMind, who helped create the program that defeated a Go champion, thinks rewards are central to how machines—and humans—acquire knowledge.

The write up contains a number of interesting statements. You will want to work through the essay and excavate those which cause your truth meter to vibrate with excitement. I noted this segment:

I don’t want to put a timescale on it [general artificial intelligence], but I would say that everything that a human can achieve, I ultimately think that a machine can. The brain is a computational process, I don’t think there’s any magic going on there.

I noted the “everything.” That’s an encompassing term. In fact, the term “everything” effectively evokes the old saw from Paradise Lost:

O sun, to tell thee how I hate thy beams,
That bring to my remembrance from what state
I fell; how glorious once above thy sphere;
Till pride and worse ambition threw me down,
Warring in heaven against heaven’s matchless King. (IV, 37–41)

I also noted this VentureBeat write up called “DeepMind’s Big Losses and the Questions around Running an AI Lab.” The MBA speak cannot occlude this factoid (which I assume is close enough for horseshoes):

According to its annual report filed with the UK’s Companies House register, DeepMind has more than doubled its revenue, raking in £266 million in 2019, up from £103 million in 2018. But the company’s expenses continue to grow as well, increasing from £568 million in 2018 to £717 million in 2019. The overall losses of the company grew from £470 million in 2018 to £477 million in 2019.

Doing “everything” does seem to be expensive. It was expensive for IBM to get Watson on the Jeopardy show. Google has pumped money into DeepMind to nuke a hapless human Go player.

I also noted this write up: “Google Told Scientists to Use a Positive Tone in AI Research, Documents Show.” I noted this passage:

Four staff researchers, including the senior scientist Margaret Mitchell, said they believe Google is starting to interfere with crucial studies of potential technology harms.

Beyond Search believes that these write ups make clear:

  1. Google is in the midst of a public relations offensive. Perhaps it is more of a singularity than Google’s announcements about quantum computing. My hunch is that Timnit Gebru’s experience may be an example of Google-entanglement.
  2. Google is trotting out the big dogs to provide an explainer about “everything.” Wait. Isn’t that a logical impossibility, like the Gödel thing?
  3. Google is in the midst of another high school science club management moment. The effort is amusing in exactly that way.

Net net: My take is that Google announced that it would “solve death.” This did not happen. “Everything,” therefore, is another example of the Arnold Law of Online: “Online fosters a perception that one is infallible, infinite, and everlasting.” Would anyone wager some silver on the veracity of my Law?

Stephen E Arnold, December 28, 2020

Arthur.ai Designed To Ensure Accuracy In Machine Learning Models

December 28, 2020

Most tech companies are investing their capital in designing machine learning models, but Arthur.ai decided to do something different. TechCrunch reveals Arthur.ai’s innovation in “Arthur.ai Snags $15M Series A To Grow Machine Learning Monitoring Tool.” The Arthur.ai tool is designed to ensure that machine learning models retain their accuracy over time.

Even fine-tuned machine learning models need maintenance like any other technology. Index Ventures saw the necessity of such a tool and led a Series A round of funding with investments from Homebrew, AME Ventures, Workbench, Acrew, and Plexo Capital:

“Investor Mike Volpi from Index certainly sees the value proposition of this company. ‘One of the most critical aspects of the AI stack is in the area of performance monitoring and risk mitigation. Simply put, is the AI system behaving like it’s supposed to?’ he wrote in a blog post announcing the funding.”

Arthur.ai has doubled its headcount since its founding, and founder and CEO Adam Wenchel wants to continue the expansion. AWS released a similar tool called SageMaker Clarify, but Wenchel views the potential competition as affirmation: if other products provide the same service, there is a market for it. He is also not worried about the larger cloud companies, because Arthur.ai will focus entirely on its monitoring tools while the bigger players spread their attention across many products.
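What does “monitoring” actually check? One common industry signal (not necessarily Arthur.ai’s own method) is the population stability index, which compares the distribution of live inputs or scores against the training-time baseline and raises a flag when they diverge. A minimal sketch:

```python
# Minimal sketch of one common monitoring signal: population stability
# index (PSI) between training-time and live distributions of a model
# input or score. Illustrates the category of check a monitoring tool
# runs; not Arthur.ai's actual implementation.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI = sum over bins of (live% - base%) * ln(live% / base%)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    base_pct = np.clip(base_counts / base_counts.sum(), 1e-6, None)
    live_pct = np.clip(live_counts / live_counts.sum(), 1e-6, None)
    return float(((live_pct - base_pct) * np.log(live_pct / base_pct)).sum())

rng = np.random.default_rng(3)
train_scores = rng.normal(0.0, 1.0, 50_000)  # distribution at training time
live_scores = rng.normal(0.6, 1.2, 50_000)   # the world has since moved
print(f"PSI: {psi(train_scores, live_scores):.2f}")
```

A PSI above roughly 0.25 is a conventional retrain-or-investigate alarm; the exact thresholds and features a commercial tool watches will differ.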

Whitney Grace, December 28, 2020
