Google AI Has a New Competitive Angle: AI Is a Bit of a Problem for Everyone Except Us, Of Course

April 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Google has not recovered from the MSFT Davos PR coup. The online advertising company with a wonderful approach to management promptly did a road show in Paris which displayed incorrect data. Next the company declared a Code Red emergency (whatever that means in an ad outfit). Then the Googley folk reorganized by laterally arabesque-ing Dr. Jeff Dean somewhere and putting smart software in the hands of the DeepMind survivors. Okay, now we are into Phase 2 of the quantumly supreme company’s push into smart software.


An unknown person at Speaker’s Corner in Hyde Park is explaining to the enthralled passers-by that “AI is like cryptocurrency.” Is there a face in the crowd that looks like the powerhouse behind FTX? Good enough, MSFT Copilot.

A good example of this PR tactic appears in “Google DeepMind Co-Founder Voices Concerns Over AI Hype: ‘We’re Talking About All Sorts Of Things That Are Just Not Real’.” Some additional color, similar to that of sour grapes, appears in “Google’s DeepMind CEO Says the Massive Funds Flowing into AI Bring with It Loads of Hype and a Fair Share of Grifting.”

The main idea in these write ups is that the Top Dog at DeepMind and possible candidate to take over the online ad outfit is not talking about ruining the life of a Go player or folding proteins. Nope. The new message, as I understand it, is that AI is just not that great. Here’s an example of the new PR push:

The fervor amongst investors for AI, Hassabis told the Financial Times, reminded him of “other hyped-up areas” like crypto. “Some of that has now spilled over into AI, which I think is a bit unfortunate,” Hassabis told the outlet. “And it clouds the science and the research, which is phenomenal.”

Yes, crypto. Digital currency is associated with stellar professionals like Sam Bankman-Fried and those engaged in illegal activities. (I will be talking about some of those illegal activities at the US National Cyber Crime Conference in a few weeks.)

So what’s the PR angle? Here’s my take on the message from the CEO in waiting:

  1. The message allows Google and its numerous supporters to say, “We think AI is like crypto but maybe worse.”
  2. Google can suggest, “Our AI is not so good, but that’s because we are working overtime to avoid the crypto-curse which is inherent in outfits engaged in shoving AI down your throat.”
  3. Googlers keep a cool head, unlike the possibly criminal outfits cheerleading for the wonders of artificial intelligence.

Will the approach work? In my opinion, yes, it will add a joke to the Sundar and Prabhakar Comedy Act. No, I don’t think it will alter the scurrying in the world of entrepreneurs, investment firms, and “real” Silicon Valley journalists, poohbahs, and pundits.

Stephen E Arnold, April 2, 2024

Social Media: Do You See the Hungry Shark?

April 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

After years of social media’s diffusion, most observers have ignored how flows of user-generated content work like a body shop’s sandblaster. Now that societal structures are revealing cracks in the drywall and damp basements, I have noticed an uptick in chatter about Facebook- and TikTok-type services. A recent example of Big Thinkers wrestling with the quite publicly visible behavior of mobile phone fiddling is the Nature write up “The Great Rewiring: Is Social Media Really Behind an Epidemic of Teenage Mental Illness?”


Thanks, MSFT Copilot. How is your security initiative coming along? Ah, good enough.

The article raises an interesting question: Are social media and mobile phones the cause of what many of my friends and colleagues see as a very visible disintegration of social conventions? The fabric of civil behavior seems to be fraying and maybe coming apart. I am not sure the local news in the Midwest region where I live reports the shootings that seem to occur with some regularity.

The write up (possibly written by a person who uses social media and demonstrates polished swiping techniques) wrestles with the possibility that the unholy marriage of social media and mobile devices may not be the “problem.” The notion that other factors come into play is an example of an established source of information working hard to take a balanced, rational approach to what has become a standard of behavior.

The write up says:

Two things can be independently true about social media. First, that there is no evidence that using these platforms is rewiring children’s brains or driving an epidemic of mental illness. Second, that considerable reforms to these platforms are required, given how much time young people spend on them.

Then the article wraps up with this statement:

A third truth is that we have a generation in crisis and in desperate need of the best of what science and evidence-based solutions can offer. Unfortunately, our time is being spent telling stories that are unsupported by research and that do little to support young people who need, and deserve, more.

Let me offer several observations:

  1. The corrosive effect of digital information flows is simply not on the radar of those who “think about” social media. Consequently, the inherent function of online information is overlooked, and therefore, the rational statements are fluffy.
  2. The only way to constrain digital information and the impact of its flows is to pull the plug. That will not happen because the drug cartel-like business models produce too much money.
  3. The notion that “research” will light the path forward is interesting. I cannot “trust” peer reviewed papers authored by the former president of Stanford University or the research of the former Top Dog at Harvard University’s “ethics” department. Now I am supposed to believe that “research” will provide answers. Not so fast, pal.

Net net: The failure to understand a basic truth about how online works means that fixes are not now possible. Sound gloomy? You are getting my message. Time to adapt and remain flexible. The impacts are just now being seen as more than a post-Covid or economic downturn issue. Online information is a big fish, and it remains mostly invisible. The good news is that some people have recognized that the water in the data lake has powerful currents.

Stephen E Arnold, April 2, 2024

Google Mandates YouTube AI Content Be Labeled: Accurately? Hmmmm

April 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The rules for proper use of AI-generated content are still up in the air, but big tech companies are already being pressured to introduce regulations. Neowin reported that “Google Is Requiring YouTube Creators To Post Labels For Realistic AI-Created Content” on videos. This is a smart idea in the age of misinformation, especially when technology can realistically create images and sounds.

Google first announced the new requirement for realistic AI content in November 2023. YouTube’s Creator Studio now has a tool to label AI content. The new tool is called “Altered content” and asks creators yes-or-no questions. Its simplicity is similar to YouTube’s question about whether a video is intended for children. The “Altered content” label applies to the following (a rough sketch of the yes-or-no logic appears after the list):

• “Makes a real person appear to say or do something they didn’t say or do

• Alters footage of a real event or place

• Generates a realistic-looking scene that didn’t actually occur”
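
For illustration only, the labeling requirement boils down to an any-of check across those three questions. Here is a minimal Python sketch; the class, field names, and function are my assumptions, not Google’s actual tooling:

```python
# Hypothetical sketch of the "Altered content" yes-or-no check.
# The three criteria come from the quoted list above; the names
# below are invented for illustration, not Google's API.
from dataclasses import dataclass

@dataclass
class AlteredContentAnswers:
    person_fabricated: bool    # real person appears to say/do something they didn't
    real_event_altered: bool   # footage of a real event or place is altered
    fake_scene_realistic: bool # realistic-looking scene that didn't occur

def label_required(a: AlteredContentAnswers) -> bool:
    """The label is required if any one criterion is met."""
    return any((a.person_fabricated, a.real_event_altered, a.fake_scene_realistic))

# "Someone riding a unicorn through a fantastical world" is not
# realistic, so all three answers are no and no label is needed.
print(label_required(AlteredContentAnswers(False, False, False)))  # False
```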

The article goes on to say:

“The blog post states that YouTube creators don’t have to label content made by generative AI tools that do not look realistic. One example was “someone riding a unicorn through a fantastical world.” The same applies to the use of AI tools that simply make color or lighting changes to videos, along with effects like background blur and beauty video filters.”

Google says it will have enforcement measures if creators consistently fail to label their realistic AI videos, but the consequences are not specified. YouTube will also reserve the right to place labels on videos itself. There will also be a reporting system viewers can use to notify YouTube of unlabeled videos. It’s not surprising that Google’s algorithms can’t distinguish realistic videos from fake ones. Perhaps the algorithms are outsmarting their creators.

Whitney Grace, April 2, 2024

AI and Job Wage Friction

April 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read again “The Jobs Being Replaced by AI – An Analysis of 5M Freelancing Jobs,” published in February 2024 by Bloomberg (the outfit interested in fiddled firmware on motherboards). The main idea in the report is that AI boosted a number of freelance jobs. What are the jobs where AI has not (as yet) added friction to the money-making process? Here’s the list of jobs NOT impeded by smart software:

Accounting

Backend development

Graphics design

Market research

Sales

Video editing and production

Web design

Web development

Other sources suggest that “Accounting” may be targeted by an AI-powered efficiency expert. I want to watch how this profession navigates the smart software in what is often a repetitive series of eye-glazing steps.


Thanks, MSFT Copilot. How are you doing with your reorganization? Running smoothly? Yeah. Smoothly.

Now to the meat of the report: What professions or jobs were the MOST affected by AI? From the cited write up, these are:

Customer service (the exciting, long-suffering discipline of chatbots)

Social media marketing

Translation

Writing

The write up includes another telling chunk of data. AI has apparently had an impact on the amount of money some customers were willing to pay freelancers or gig workers. The jobs finding greater billing friction are:

Backend development

Market research

Sales

Translation

Video editing and production

Web development

Writing

The article contains quite a bit of related information. Please consult the original for a number of almost unreadable graphics and tabular data. I do want to offer several observations:

  1. One consequence of AI, if the data in this report are close enough for horseshoes, is that smart software drives down what customers will pay for a wide range of human-centric services. You don’t lose your job; you just get a taste of Victorian sweatshop management thinking.
  2. Once smart software is perceived as reasonably capable of good enough translation, it is embraced. My view is that translation services are likely to be a harbinger of how AI will affect other jobs. AI does not have to be great; it just has to be perceived as okay. Then. Bang. Hasta la vista, human translators, except for certain specialized functions.
  3. Data like the information in the Bloomberg article provide a handy road map for AI developers. The jobs least affected by AI become targets for entrepreneurs who find that low-hanging fruit like translation has been picked. (Accountants, I surmise, should not relax too much.)

Net net: The wage suppression angle and the incremental adoption of AI followed by quick adoption are important ideas to consider when analyzing the economic ripples of AI.

Stephen E Arnold, April 1, 2024

Open Source Software: Fool Me Once, Fool Me Twice, Fool Me Once Again

April 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Open source is shoved in my face each and every day. I nod and say, “Sure” or “Sounds on point.” But in the back of my mind, I ask myself, “Am I the only one who sees open source as a way to demonstrate certain skills, a Hail Mary in a dicey job market, or a bit of MBA fancy dancing?” I am not alone. Navigate to “Software Vendors Dump Open Source, Go for Cash Grab.” The write up does a reasonable job of explaining the open source “playbook.”

The write up asserts:

A company will make its program using open source, make millions from it, and then — and only then — switch licenses, leaving their contributors, customers, and partners in the lurch as they try to grab billions.

Yep, billions with a “B”. I think that the goal may be big numbers, but some open source outfits chug along ingesting venture funding and surfing on assorted methods of raising cash and never really get into “B” territory. I don’t want to name names because as a dinobaby, the only thing I dislike more than doctors is a legal eagle. Want proper nouns? Sorry, not in this blog post.


Thanks, MSFT Copilot. Where are you in the open source game?

The write up focuses on Redis, which is a database that strikes me as quite similar to the now-forgotten Pinpoint approach or the clever Inktomi method to speed up certain retrieval functions. Well, Redis, unlike Pinpoint or Inktomi, is into the “B” numbers. Two billion to be semi-exact in this era of specious valuations.

The write up says that Redis changed its license terms. This is nothing new. 23andMe made headlines with some term modifications as the company slowly settled to earth and landed in a genetically rich river bank in Silicon Valley.

The article quotes Redis Big Dogs as saying:

“Beginning today, all future versions of Redis will be released with source-available licenses. Starting with Redis 7.4, Redis will be dual-licensed under the Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1). Consequently, Redis will no longer be distributed under the three-clause Berkeley Software Distribution (BSD).”

I think this means, “Pay up.”

The author of the essay (Steven J. Vaughan-Nichols) identifies three reasons for the bait-and-switch play. I think there is just one — money.

The big question is, “What’s going to happen now?”

The essay does not provide an answer. Let me fill the void:

  1. Open source will chug along until there is a break out program. Then whoever has the rights to the open source (that is, the one or handful of people who created it) will look for ways to make money. The software is free, but modules to make it useful cost money.
  2. Open source will rot from within because “open” makes it possible for bad actors to poison widely used libraries. Once a big outfit suffers big losses, it will be hasta la vista, open source, and “Hello, Microsoft” or whoever the accountants and lawyers running the company believe cares about their software.
  3. Open source becomes quasi-commercial. Options range from Microsoft charging for GitHub access to an open source repository becoming a membership operation like a digital Mar-A-Lago. The “hosting” service becomes the equivalent of a golf course, and the people who use the facilities pay fees which can vary widely and without any logic whatsoever.

Which of these three predictions will come true? Answer: The one that allows the breakout open source stakeholders to generate the maximum amount of money.

Stephen E Arnold, April 1, 2024

Cow Control, or Elsie, We Are Watching

April 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Australia already uses remotely controlled drones to herd sheep. Drones are considered more ethical than traditional herding methods because they’re less stressful for sheep.

Now the island continent is using advanced tracking technology to monitor buffalo and cattle. Euro News investigates how technology is used in the cattle industry: “Scientists Are Attempting To Track 1000 Cattle And Buffalo From Space Using GPS, AI, And Satellites.”

An estimated 22,000 buffalo roam freely in Arnhem Land, Australia. The emphasis is on “estimated,” because the exact number is unknown. These buffalo are harming Arnhem Land’s environment. A feral buffalo weighing 1,200 kilograms and standing 188 centimeters tall not only damages the environment by eating a lot of plant life but also destroys cultural rock art, ceremonial sites, and waterways. Feral buffalo and cattle are major threats to Northern Australia’s economy and ecology.

Scientists, cattlemen, and indigenous rangers have teamed up to work on a program that will monitor feral bovines from space. The program is called SpaceCows and will last four years. It is a large-scale remote herd management system powered by AI and satellite. It’s also supported by the Australian government’s Smart Farming Partnership.

The rangers and stockmen trap feral bovines, implant solar-powered GPS tags, and release them. Each tag transmits data to a satellite 650 km overhead for two years or until the tag falls off. SpaceCows relies on Microsoft Azure’s cloud platform. The satellites and AI create a digital map of the Australian outback that shows where the feral animals live:

“Once the rangers know where the animals tend to live, they can concentrate on conservation efforts – by fencing off important sites or even culling them. ‘There’s very little surveillance that happens in these areas. So, now we’re starting to build those data sets and that understanding of the baseline disease status of animals,’ said Andrew Hoskins, a senior research scientist at CSIRO.

If successful, it could be one of the largest remote herd management systems in the world.”
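
How might raw GPS pings become a “where do they live” map? A toy Python sketch follows, assuming invented telemetry tuples and a simple grid; it is my illustration of the general technique, not the SpaceCows implementation:

```python
# Toy aggregation: bin GPS fixes into coarse grid cells to show where
# tagged animals congregate. Coordinates and cell size are invented.
from collections import Counter

def density_grid(fixes, cell_deg=0.1):
    """Count (lat, lon) fixes per grid cell roughly 11 km on a side."""
    counts = Counter()
    for lat, lon in fixes:
        cell = (round(lat / cell_deg) * cell_deg,
                round(lon / cell_deg) * cell_deg)
        counts[cell] += 1
    return counts

# Fabricated fixes near Arnhem Land (roughly -12.5, 134.0)
fixes = [(-12.51, 134.02), (-12.52, 134.01), (-12.49, 133.98), (-13.10, 134.50)]
for cell, n in density_grid(fixes).most_common():
    print(f"cell {cell}: {n} fixes")
```

Rangers could then rank cells by fix count to decide where to fence or cull, which is the sort of prioritization the quoted passage describes.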

Hopefully SpaceCows will protect the natural and cultural environment.

Whitney Grace, April 1, 2024

AI and Stupid Users: A Glimpse of What Is to Come

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

When smart software does not deliver, who is responsible? I don’t have a dog in the AI fight. I am thinking about the deployment of smart software in professional environments. When the outputs are wonky or do not deliver the bang of a competing system, what is the customer supposed to do? Is the vendor responsible? Is the customer responsible? Is the person who tried to validate the outputs guilty of putting a finger on the scale of a system whose developers cannot explain exactly how an output was determined? Viewed from one angle, this is the Achilles’ heel of artificial intelligence. Viewed from another angle, determining responsibility is an issue which, in my opinion, will be decided by legal processes. In the meantime, the issue of a system’s not working can have significant consequences. How about those automated systems on aircraft which dive suddenly or vessels which can jam a ship channel?

I read a write up which provides a peek at what large outfits pushing smart software will do when challenged about quality, accuracy, or other subjective factors related to AI-imbued systems. Let’s take a quick look at “Customers Complain That Copilot Isn’t As Good as ChatGPT, Microsoft Blames Misunderstanding and Misuse.”

The main idea in the write up strikes me as:

Microsoft is doing absolutely everything it can to force people into using its Copilot AI tools, whether they want to or not. According to a new report, several customers have reported a problem: it doesn’t perform as well as ChatGPT. But Microsoft believes the issue lies with people who aren’t using Copilot correctly or don’t understand the differences between the two products.

Yep, the user is the problem. I can imagine the adjudicator (illustrated as a mother) listening to a large company’s sales professional and a professional certified developer arguing about how the customer went off the rails. Is the original programmer the problem? Is the new manager in charge of AI responsible? Is it the user or users?


Illustration by MSFT Copilot. Good enough, MSFT.

The write up continues:

One complaint that has repeatedly been raised by customers is that Copilot doesn’t compare to ChatGPT. Microsoft says this is because customers don’t understand the differences between the two products: Copilot for Microsoft 365 is built on the Azure OpenAI model, combining OpenAI’s large language models with user data in the Microsoft Graph and the Microsoft 365 apps. Microsoft says this means its tools have more restrictions than ChatGPT, including only temporarily accessing internal data before deleting it after each query.

Here’s another snippet from the cited article:

In addition to blaming customers’ apparent ignorance, Microsoft employees say many users are just bad at writing prompts. “If you don’t ask the right question, it will still do its best to give you the right answer and it can assume things,” one worker said. “It’s a copilot, not an autopilot. You have to work with it,” they added, which sounds like a slogan Microsoft should adopt in its marketing for Copilot. The employee added that Microsoft has hired partner BrainStorm, which offers training for Microsoft 365, to help create instructional videos to help customers create better Copilot prompts.
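
What does asking “the right question” look like in practice? A hypothetical before-and-after, invented for illustration; these are not Microsoft or BrainStorm training prompts:

```python
# Invented example of the prompt rewrite the "work with it" advice implies.
vague_prompt = "Summarize the document."

# Stating audience, format, and what to flag leaves the model less
# room to "assume things," as the Microsoft employee put it.
specific_prompt = (
    "Summarize the attached Q3 planning document in five bullet points "
    "for an executive audience. Flag any deadlines mentioned, and note "
    "sections where budget figures are missing."
)
print(specific_prompt)
```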

I will be interested in watching how these “blame games” unfold.

Stephen E Arnold, March 29, 2024

How to Fool a Dinobaby Online

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Marketers, take note. Forget about gaming the soon-to-be-on-life-support Google Web search. Embrace fakery. And who, you may ask, will teach me? The answer is The Daily Beast. To begin your life-changing journey, navigate to “Facebook Is Filled With AI-Generated Garbage—and Older Adults Are Being Tricked.”


Two government regulators wonder where the deep fakes have gone. Thanks, MSFT Copilot. Keep on updating, please.

The write up explains:

So far, the few experiments to analyze seniors’ AI perception seem to align with the Facebook phenomenon…. The team found that the older participants were more likely to believe that AI-generated images were made by humans.

Okay, that’s step one: Identify your target market.

What’s next? The write up points out:

scammers have wielded increasingly sophisticated generative AI tools to go after older adults. They can use deepfake audio and images sourced from social media to pretend to be a grandchild calling from jail for bail money, or even falsify a relative’s appearance on a video call.

That’s step two: Weave in a family or social tug on the heart strings.

Then what? The article helpfully notes:

As of last week, there are more than 50 bills across 30 states aimed to clamp down on deepfake risks. And since the beginning of 2024, Congress has introduced a flurry of bills to address deepfakes.

Yep, the flag has been dropped. The race with few or no rules is underway. But what about government rules and regulations? Yeah, those will be chugging around after the race cars have disappeared from view.

Thanks for the guidelines.

Stephen E Arnold, March 29, 2024

The Many Faces of Zuckbook

March 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

As evidenced by his business decisions, Mark Zuckerberg seems to be a complicated fellow. For example, a couple of recent articles illustrate this contrast: on one hand is his commitment to support open source software, an apparently benevolent position. On the other, Meta is once again in the crosshairs of EU privacy advocates for what they insist is its disregard for the law.

First, we turn to a section of VentureBeat’s piece, “Inside Meta’s AI Strategy: Zuckerberg Stresses Compute, Open Source, and Training Data.” In it, reporter Sharon Goldman shares highlights from Meta’s Q4 2023 earnings call. She emphasizes Zuckerberg’s continued commitment to open source software, specifically AI software Llama 3 and PyTorch. He touts these products as keys to “innovation across the industry.” Sounds great. But he also states:

“Efficiency improvements and lowering the compute costs also benefit everyone including us. Second, open source software often becomes an industry standard, and when companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products.”

Ah, there it is.

Our next item was apparently meant to be sneaky, but who did Meta think it was fooling? The Register reports, “Meta’s Pay-or-Consent Model Hides ‘Massive Illegal Data Processing Ops’: Lawsuit.” Meta is attempting to “comply” with the EU’s privacy regulations by making users pay to opt in to them. That is not what regulators had in mind. We learn:

“Those of us with aunties on FB or friends on Instagram were asked to say yes to data processing for the purpose of advertising – to ‘choose to continue to use Facebook and Instagram with ads’ – or to pay up for a ‘subscription service with no ads on Facebook and Instagram.’ Meta, of course, made the changes in an attempt to comply with EU law. But privacy rights folks weren’t happy about it from the get-go, with privacy advocacy group noyb (None Of Your Business), for example, sarcastically claiming Meta was proposing you pay it in order to enjoy your fundamental rights under EU law. The group already challenged Meta’s move in November, arguing EU law requires consent for data processing to be given freely, rather than to be offered as an alternative to a fee. Noyb also filed a lawsuit in January this year in which it objected to the inability of users to ‘freely’ withdraw data processing consent they’d already given to Facebook or Instagram.”

And now eight European Consumer Organization (BEUC) members have filed new complaints, insisting Meta’s pay-or-consent tactic violates the European General Data Protection Regulation (GDPR). While that may seem obvious to some, Meta insists it is in compliance with the law. Because of course it does.

Cynthia Murrell, March 29, 2024

Who Is Responsible for Security Problems? Guess, Please

March 28, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In my opinion, “Zero-Days Exploited in the Wild Jumped 50% in 2023, Fueled by Spyware Vendors” is a semi-sophisticated chunk of content marketing and an example of information shaping. The source of the “report” is Google. The article appears in what was a Google- and In-Q-Tel-backed company publication. The company is named Recorded Future and appears to be owned in whole or in part by a financial concern. In a separate transaction, Google purchased a cyber security outfit called Mandiant, which provides services to government and commercial clients. This is an interesting collection of organizations and each group’s staff of technical professionals.


The young players are arguing about whose shoulders will carry the burden of the broken window. The batter points to the fielder. The fielder points to the batter. Watching are the coaches and teammates. Everyone, it seems, is responsible. So who will the automobile owner hold responsible? That’s a job for the lawyer retained by the entity with the deepest pockets and an unfettered communications channel. Nice work, MSFT Copilot. Is this scenario one with which you are familiar?

The article contains what seems to me quite shocking information; that is, companies providing specialized services to government agencies like law enforcement and intelligence entities are compromising the security of mobile phones. What’s interesting is that Google’s Android software is one of the more widely used “enablers” of what is now a ubiquitous computing device.

I noted this passage:

Commercial surveillance vendors (CSVs) were the leading culprit behind browser and mobile device exploitation, with Google attributing 75% of known zero-day exploits targeting Google products as well as Android ecosystem devices in 2023 (13 of 17 vulnerabilities). [Emphasis added. Editor.]

Why do I find the article intriguing?

  1. This “revelatory” write up can be interpreted to mean that spyware vendors have to be put in some type of quarantine, possibly similar to those odd boxes in airports where people who smoke can partake of a potentially harmful habit. In the special “room,” these folks can be monitored perhaps?
  2. The number of exploits parallels the massive number of security breaches created by widely used laptop, desktop, and server software systems. Bad actors have been attacking for many years, and now the sophistication and volume of cyber attacks seem to be increasing. Every few days cyber security vendors alert me to a new threat; for example, entering hotel rooms with Unsaflok. It seems that security problems are endemic.
  3. The “fix” or “remedial” steps involve users, government agencies, and industry groups. I interpret the argument as suggesting that companies developing operating systems need help and possibly cannot be responsible for these security problems.

The article can be read as a summary of recent developments in the specialized software sector and its careless handling of its technology. However, I think the article is suggesting that the companies building and enabling mobile computing are just victimized by bad actors, lousy regulations, and sloppy government behaviors.

Maybe? I believe I will tilt toward the content marketing purpose of the write up. The argument “Hey, it’s not us” is not convincing me. I think it will complement other articles that blur responsibility the way faces are blurred in some videos.

Stephen E Arnold, March 28, 2024
