AItoAI Interviews Connecticut Senator James Maroney

May 30, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

AItoAI: Smart Software for Government Use Cases has published its interview with Senator James Maroney. Senator Maroney is the driving force behind legislation to regulate artificial intelligence in Connecticut. In the 20-minute interview, Senator Maroney elaborated on several facets of the proposed legislation. The interviewers were the father-and-son team of Erik S. (the son) and Stephen E Arnold (the father).


Senator James Maroney spearheaded the Connecticut artificial intelligence legislation.

Senator Maroney pointed to the rapid growth of AI products and services. That growth has economic implications for the citizens and businesses of Connecticut. The senator explained that biases in algorithms can have a negative impact. For that reason, specific procedures are required to help ensure that AI systems operate in a fair way. To help address this issue, Senator Maroney advocates a risk-based approach to AI. The idea is that a low-risk AI service like getting information about a vacation requires less attention than a higher-risk application such as evaluating employee performance. The bill includes provisions for additional training. The senator’s commitment to upskilling is tied to helping citizens and organizations of all types use AI in a beneficial manner.

AItoAI wants to call attention to Senator Maroney’s willingness to make time for the interview. Erik and Stephen thank the senator for his explanation of some of the bill’s provisions.

You can view the video at https://youtu.be/ZfcHKLgARJU or listen to the audio of the 20-minute program at https://shorturl.at/ziPgr.

Stephen E Arnold, May 30, 2024

Telegram: No Longer Just Mailing It In

May 29, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Allegedly about 900 million people “use” Telegram. More are going to learn about the platform as the company comes under more European Union scrutiny, kicks the tires for next-generation obfuscation technology, and becomes a best friend of Microsoft… for now. “Telegram Gets an In-App Copilot Bot” reports:

Microsoft has added an official Copilot bot within the messaging app Telegram, which lets users search, ask questions, and converse with the AI chatbot. Copilot for Telegram is currently in beta but is free for Telegram users on mobile or desktop. People can chat with Copilot for Telegram like a regular conversation on the messaging app. Copilot for Telegram is an official Microsoft bot (make sure it’s the one with the checkmark and the username @CopilotOfficialBot).

You can “try it now.” Just navigate to Microsoft “Copilot for Telegram.” At this location, you can:

Meet your new everyday AI companion: Copilot, powered by GPT, now on Telegram. Engage in seamless conversations, access information, and enjoy a smarter chat experience, all within Telegram.


A dinobaby lecturer explains the Telegram APIs and the bot function for automating certain operations within the Telegram platform. Some in the class are looking at TikTok, scrolling Instagram, or reading about a breakthrough in counting large numbers of objects using a unique numerical recipe. But Telegram? WhatsApp and Signal are where the action is, right? Thanks, MSFT Copilot. You are into security and now Telegram. Keep your focus, please.
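The bot function the caption mentions is worth a concrete illustration. Telegram exposes automation through its public Bot API, an ordinary HTTPS interface; the sketch below is minimal and hedged, with a placeholder token and chat ID (nothing here comes from the post or from Microsoft’s Copilot bot).

    import requests  # third-party HTTP client

    # Hypothetical placeholders: a real token comes from Telegram's @BotFather,
    # and the chat ID identifies the conversation the bot posts into.
    BOT_TOKEN = "123456:ABC-DEF_placeholder"
    CHAT_ID = 987654321

    def send_message(text: str) -> dict:
        """Post a message through the Bot API's sendMessage method."""
        url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
        response = requests.post(url, json={"chat_id": CHAT_ID, "text": text}, timeout=10)
        return response.json()

    if __name__ == "__main__":
        print(send_message("Automated hello from a Telegram bot"))

The point is simply that a handful of HTTP calls is enough to automate posting into a Telegram conversation.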

Next week, I will deliver a talk about Telegram and some related information about obfuscated messaging at the TechnoSecurity & Digital Forensics Conference. I no longer do too many lectures because I am an 80-year-old dinobaby, and I hate flying and standing around talking to people 50 years younger than I. However, my team’s research into end-to-end encrypted messaging yielded some interesting findings. At the 2024 US National Cyber Crime Conference, about 260 investigators listened to my 75-minute talk, and a number of them said, “We did not know that.” I will also do a Telegram-centric lecture at another US government event in September. But in this short post, I want to cover what the “deal” with Microsoft suggests.

Let’s get to it.

Telegram operates out of Dubai. Its distributed team of engineers has been adding features and functions to what began as a messaging app in Russia. The “legend” of Telegram is an interesting story, but I remain skeptical about the company, its links with a certain country, and the direction in which the firm is headed. If you are not familiar with the service, it has morphed into a platform with numerous interesting capabilities. For some actors, Telegram has replaced the Dark Web; examples include contraband, “personal” services, and streaming video to thousands of people. (Note: Messages on Telegram are not encrypted by default as they are on some other E2EE messaging applications.) Some Telegram users pay to get “special” programs. (Please, use your imagination.)

Why is Telegram undergoing this shift from humble messaging app to a platform? Our research suggests several reasons. I want to point out that Pavel Durov does not have a public profile on the scale of a luminary like Elon Musk or Sam AI-Man, but he is out and about. He conducted an “exclusive” and possibly red-herring discussion with Tucker Carlson in April 2024. After the interview, Mr. Durov took direct action to block certain message flows from Ukraine into Russia. That may be one reason: Telegram is actively steering information about Ukraine’s view of Mr. Putin’s special operation. Yep, freedom.

Are there others? Let me highlight three:

  1. Mr. Durov and his brother, who allegedly holds two PhDs, see an opportunity to make money. The Durovs, however, are not hurting for cash.
  2. American messaging apps have been fat and lazy. Mr. Durov is an innovator, and he wants to make darned sure that he runs rings around Signal, WhatsApp, and a number of other outfits. Ego? My team thinks that is part of Mr. Durov’s motivation.
  3. Telegram is expanding because it may not be an independent, free-wheeling outfit. Several on my team think that Mr. Durov answers to a higher authority. Is that authority aligned with the US? Probably not.

Now the Microsoft deal?

Several questions may get your synapses in gear:

  1. Where are the data flowing through Telegram located / stored geographically? The service can regenerate some useful information for a user with a new device.
  2. Why tout freedom and free speech in April 2024 and several weeks later apply restrictions on data flow? Does this suggest a capability to monitor by user, by content type, and by other metadata?
  3. Why is Telegram exploring additional network enhancements? My team thinks that Mr. Durov has some innovations in obfuscation planned. If the company does implement certain technologies freely disclosed in US patents, what will that mean for analysts and investigators?
  4. Why a tie up with Microsoft? Whose idea was this? Who benefits from the metadata? What happens if Telegram has some clever ideas about smart software and the Telegram bot function?

Net net: Not too many people in Europe’s regulatory entities have paid much attention to Telegram. The entities of interest have been bigger fish. Now Telegram is growing faster than a Chernobyl boar stuffed on radioactive mushrooms. The EU is recalibrating for Telegram at this time. In the US, the “I did not know” reaction provides some insight into general knowledge about Telegram’s more interesting functions. Think pay-to-view streaming video about certain controversial subjects. Free storage and data transfer are provided by Telegram, a company which does not embrace the Netflix approach to entertainment. Telegram is, as I explain in my lectures, interesting, very interesting.

Stephen E Arnold, May 29, 2024

AI Overviews: A He Said, She Said Argument

May 29, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Google has begun the process of setting up an AI Overview object in search results. The idea is that Google provides an “answer.” But the machine-generated response is a platform for selling sentences, “meaning,” and probably words. Most people who have been exposed to the Overview object point out some of the object’s flaws. Those “mistakes” are not the point. Before I offer some ideas about the advertising upside of an AI Overview, I want to highlight both sides of this “he said, she said” dust up. Those criticizing the Google’s enhancement to search results miss the point of generating a new way to monetize information. Those who are taking umbrage at the criticism miss the point of people complaining about how lousy the AI Overviews are perceived to be.

The criticism of Google is encapsulated in “Why Google Is (Probably) Stuck Giving Out AI Answers That May or May Not Be Right.” A “real” journalist explains:

What happens if people keep finding Bad Answers on Google and Google can’t whac-a-mole them fast enough? And, crucially, what if regular people, people who don’t spend time reading or talking about tech news, start to hear about Google’s Bad And Potentially Dangerous Answers? Because that would be a really, really big problem. Google does a lot of different things, but the reason it’s worth more than $2 trillion is still its two core products: search, and the ads that it generates alongside search results. And if people — normal people — lose confidence in Google as a search/answer machine … Well, that would be a real problem.

The idea is that the AI Overview makes Google Web search less useful than it was before AI. Whether the idea is accurate or not makes no difference to the “he said, she said” argument. The “real” news is that Google is doing something that many people may perceive as a negative. The consequence is that Google’s shiny carapace will be scratched and dented. A more colorful approach to this side of the “bad Google” argument appears in Android Authority. “Shut It Down: Google’s AI Search Results Are Beyond Terrible” states:

The new Google AI Overview feature is offering responses to queries that range from bizarre and funny to very dangerous.

Ooof. Bizarre and dangerous. Yep, that’s the new Google AI Overview.

The Red Alert Google is not taking the criticism well. Instead of Googzilla retreating into a dark, digital cave, the beastie is coming out fighting. Imagine. Google is responding to pundit criticism. Fifteen years ago, no one would have paid any attention to a podcaster-writer and a mobile device news service. Times have indeed changed.

“Google Scrambles to Manually Remove Weird AI Answers in Search” provides an allegedly accurate report about how Googzilla is responding to criticism. In spite of the split infinitive, the headline makes clear that the AI-infused online advertising machine is using humans (!) to fix up wonky AI Overviews. The write up pontificates:

Google continues to say that its AI Overview product largely outputs “high quality information” to users. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce,” Google spokesperson Meghann Farnsworth said in an email to The Verge. Farnsworth also confirmed that the company is “taking swift action” to remove AI Overviews on certain queries “where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

Google seems to acknowledge that action is required. But the Google is not convinced that it has stepped on a baby duckling or two with its AI Overview innovation.


AI Overviews represent a potential revenue flow into Alphabet. The money, not the excellence of the outputs, is what matters in today’s Google. Thanks, MSFT Copilot. Back online and working on security today?

Okay, “he said, she said.” What’s the bigger picture? I worked on a project which required setting up an ad service which sold words in a text passage. I am not permitted to name the client or the outfit with the idea. On a Web page, some text would appear with an identifier like an underline or boldface. When the reader of the Web page clicked (often inadvertently) on the word, that user would be whisked to another Web site or a pop-up ad. The idea is that instead of an Oingo (Applied Semantics)-type related concept expansion, the advertiser was buying a word. Brilliant.
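A toy sketch may make that word-selling mechanism concrete. Everything below is hypothetical (the advertiser list, the URLs, the helper name); the only idea taken from the paragraph above is that an advertiser buys a word and a click on that word whisks the reader to the advertiser’s destination.

    import re

    # Hypothetical advertiser purchases: each bought word maps to a landing page.
    PURCHASED_WORDS = {
        "vacation": "https://example-travel-ads.test/offer",
        "pizza": "https://example-pizza-ads.test/coupon",
    }

    def link_purchased_words(text: str) -> str:
        """Wrap each purchased word in an anchor pointing at the advertiser's page."""
        def replace(match: re.Match) -> str:
            word = match.group(0)
            url = PURCHASED_WORDS.get(word.lower())
            return f'<a href="{url}">{word}</a>' if url else word
        return re.sub(r"[A-Za-z]+", replace, text)

    print(link_purchased_words("Plan a vacation and order pizza tonight."))

The same lookup idea scales from single words to sentences or concepts, which is the parallel drawn to the AI Overview below.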

The AI Overview, based on my team’s look at what the Google has been crafting, sets up a similar opportunity. We discussed it at lunch on Friday, May 24, 2024, at a restaurant which featured a bridge club luncheon. Wow, was it noisy! Here’s what emerged from our frequently disrupted conversation:

  1. The AI Overview is a content object. It sits for now at the top of the search results page unless the “user” knows to add the string udm=14 to a query (see the sketch after this list)
  2. Advertising can be “sold” to the advertiser[s] who want[s] to put a message on the “topic” or “main concept” of the search
  3. Advertising can be sold to the organizations wanting to be linked to a sentence or a segment of a sentence in the AI Overview
  4. Advertising can be sold to the organizations wanting to be linked to a specific word in the AI Overview
  5. Advertising can be sold to the organizations wanting to be linked to a specific concept in the AI Overview.
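As a concrete aside on item 1, here is a minimal sketch of the udm=14 trick. The helper function is hypothetical; the assumption is only that udm=14 is an ordinary query-string parameter appended to a Google search URL, which requests the plain “Web” results view without the AI Overview object.

    from urllib.parse import urlencode

    def google_web_only_url(query: str) -> str:
        """Build a Google search URL with udm=14, skipping the AI Overview object."""
        return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

    print(google_web_only_url("cheese not sticking to pizza"))
    # -> https://www.google.com/search?q=cheese+not+sticking+to+pizza&udm=14

Few “users” will know to do this, which is why the content object keeps its position at the top of the results page.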

Whether the AI Overview is good, bad, or indifferent makes no, zero, zip difference in practice to the Google advertising “machine,” its officers, and its soon-to-be-replaced-by-smart-software staff. AI has given Google the opportunity to monetize a new content object. That content object and its advertising are additive. People who want “traditional” Google online advertising can still buy it. Furthermore, as one of my team pointed out, the presence of the new content object “space” on a search results page opens up additional opportunities to monetize certain content types. One example is buying a link to a related video which appears as an icon below, alongside, or within the content object space. The monetization opportunities seem to have some potential.

Net net: Googzilla may be ageing. To poobahs and self-appointed experts, Google may be lost in space, trembling in fear, and growing deaf due to the blaring of the Red Alert klaxons. Whatever. But the AI Overview may have some upside even if it is filled with wonky outputs.

Stephen E Arnold, May 29, 2024

Copilot: I Have Control Now, Captain. Relax, Chill

May 29, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Appearing unbidden on Windows devices, Copilot is spreading its tendrils through businesses around the world. Like a network of fungal mycorrhizae, the AI integrates itself with the roots of Windows computing systems. The longer it is allowed to intrude, the more any attempt to dislodge it will harm the entire ecosystem. VentureBeat warns, “Ceding Control: How Copilot+ and PCs Could Make Enterprises Beholden to Microsoft.”

Writer James Thomason traces a gradual transition: The wide-open potential of the early Internet gave way to walled gardens, the loss of repair rights, and a shift to outside servers controlled by cloud providers. We have gradually ceded control of both software and hardware as well as governance of our data. All while tech companies make it harder to explore alternative products and even filter our news, information, and Web exploration.

Where does that put us now? AI has ushered in a whole new level of dominion for Microsoft in particular. Thomason writes:

“Microsoft’s recently announced ‘Copilot+ PCs’ represent the company’s most aggressive push yet towards an AI-driven, cloud-dependent computing model. These machines feature dedicated AI processors, or ‘NPUs’ (neural processing units), capable of over 40 trillion operations per second. This hardware, Microsoft claims, will enable ‘the fastest, most intelligent Windows PC ever built.’ But there’s a catch: the advanced capabilities of these NPUs are tightly tethered to Microsoft’s cloud ecosystem. Features like ‘Recall,’ which continuously monitors your activity to allow you to quickly retrieve any piece of information you’ve seen on your PC, and ‘Cocreator,’ which uses the NPU to aid with creative tasks like image editing and generation, are deeply integrated with Microsoft’s servers. Even the new ‘Copilot’ key on the keyboard, which summons the AI assistant, requires an active internet connection. In effect, these PCs are designed from the ground up to funnel users into Microsoft’s walled garden, where the company can monitor, influence and ultimately control the user experience to an unprecedented degree. This split-brain model, with core functionality divided between local hardware and remote servers, means you never truly own your PC. Purchasing one of these AI-driven machines equals irrevocable subjugation to Microsoft’s digital fiefdom. The competition, user choice and ability to opt out that defined the PC era are disappearing before our eyes.”

So what does this mean for the majority of businesses that rely on Microsoft products? Productivity gains, yes, but at the price of a vendor stranglehold, security and compliance risks, and opaque AI decision-making. See the article for details on each of these.

For anyone who doubts Microsoft would be so unethical, the write-up reminds us of the company’s monopolistic tendencies. Thomason insists we cannot count on the government to intervene again, considering Big Tech’s herculean lobbying efforts. So if the regulators are not coming to save us, how can we defy Microsoft dominance? One can expend the effort to find and utilize open hardware and software alternatives, of course. Linux is a good example. But a real difference will only be made with action on a larger scale. There is an organization for that: FUTO (the Fund for Universal Technology Openness). We learn:

“One of FUTO’s key strategies is to fund open-source versions of important technical building blocks like AI accelerators, ensuring they remain accessible to a wide range of actors. They’re also working to make decentralized software as user-friendly and feature-rich as the offerings of the tech giants, to reduce the appeal of convenience-for-control tradeoffs.”

Even if and when those building blocks are available, resistance will be a challenge. It will take mindfulness about technology choices while Microsoft dangles shiny, easier options. But digital freedom, Thomason asserts, is well worth the effort.

Cynthia Murrell, May 29, 2024

French AI Is Intelligent and Not Too Artificial

May 28, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Macron: French AI Can Challenge Insane Dominance of US and China.” In the CNBC interview, Emmanuel Macron used the word “insane.” The phrase, according to the cited article, was:

French President Emmanuel Macron has called for his country’s AI leaders to challenge the “insane” dominance of US and Chinese tech giants.

French offers a number of ways to describe a loss of mental control or behavior that goes well beyond the normal; for example, aliéné, which can suggest something quite beyond the normal. The example which comes to mind might be the market dominance of US companies emulating Google-type methods. Another choice is comme un fou. This phrase suggests a crazy, high-speed action or event; for example, the amount of money OpenAI generated by selling $20 subscriptions to the ChatGPTo iPhone app in a few days. My personal favorite is dément, which has a nice blend of demented behavior and incredible action. An example: Microsoft’s recent litany of AI capabilities, creating a new category of computers purpose-built to terminate, with extreme prejudice, the market-winning MacBook devices; specifically, the itty-bitty Airs.


The road to Google-type AI has a few speed bumps. Thanks, MSFT Copilot. Security getting attention or is Cloud stability the focal point of the day?

The write up explains what M. Macron really meant:

For now, however, Europe remains a long way behind the US and Chinese leaders. None of the 10 largest tech companies by market cap are based in the continent and few feature in the top 50. The French President decried that landscape. “It’s insane to have a world where the big giants just come from China and US.”

Ah, ha. The idea appears to be a lack of balance and restraint. Well, it seems, France is going to do its best to deliver the digital equivalent of a chicken with a Label Rouge; that is, AI that is going to meet specific standards and be significantly superior to something like the $5 US Costco chicken. I anticipate that M. Macron’s government will issue a document like this Fiche filière volaille de chair 2020 for AI.

M. Macron points to two examples of French AI technology: Mistral and H (formerly Holistic). I was disappointed that M. Macron did not highlight the quite remarkable AI technology of Preligens, which is in the midst of a sale. I would suggest that Preligens is an example of why the “insane” dominance of China and the US in AI is the current reality. The company is ensnared in French regulations and in need of the type of money pumped into AI start-ups in the two countries leading the pack in AI.

M. Macron is making changes; specifically, according to the write up:

Macron has cut red tape, loosened labor protections, and reduced taxes on the wealthy. He’s also attracted foreign investment, including a €15bn funding package from the likes of Microsoft and Amazon announced earlier this month. Macron has also committed to a pan-European AI strategy. At a meeting in the  Elysée Palace this week, he hinted at the first step of a new plan: “Our aim is to Europeanize [AI], and we’re going to start with a Franco-German initiative.”

I know from experience the quality of French information-centric technologists. The principal hurdles for France are, in my opinion:

  1. Addressing the red tape. (One cannot grasp the implications of this phrase unless one tries to rent an apartment in France.)
  2. Juicing up the investment system and methods.
  3. Overcoming the ralentisseurs (speed bumps) on the Information Superhighway running between Paris, DC, and Beijing.

Net net: Check out Preligens.

Stephen E Arnold, May 28, 2024

Big Tech and AI: Trust Us. We Just Ooze Trust

May 28, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Amid rising concerns, The Register reports, “Top AI Players Pledge to Pull the Plug on Models that Present Intolerable Risk” at the recent AI Seoul Summit. How do they define “intolerable?” That little detail has yet to be determined. The non-binding declaration was signed by OpenAI, Anthropic, Microsoft, Google, Amazon, and other AI heavyweights. Reporter Laura Dobberstein writes:

“The Seoul Summit produced a set of Frontier AI Safety Commitments that will see signatories publish safety frameworks on how they will measure risks of their AI models. This includes outlining at what point risks become intolerable and what actions signatories will take at that point. And if mitigations do not keep risks below thresholds, the signatories have pledged not to ‘develop or deploy a model or system at all.’”

We also learn:

“Signatories to the Seoul document have also committed to red-teaming their frontier AI models and systems, sharing information, investing in cyber security and insider threat safeguards in order to protect unreleased tech, incentivizing third-party discovery and reporting of vulnerabilities, AI content labelling, prioritizing research on the societal risks posed by AI, and to use AI for good.”

Promises, promises. And where are these frameworks so we can hold companies accountable? Hang tight, the check is in the mail. The summit produced a document full of pretty words, but as the article notes:

“All of that sounds great … but the details haven’t been worked out. And they won’t be, until an ‘AI Action Summit’ to be staged in early 2025.”

If then. After all, there’s no need to hurry. We are sure we can trust these AI bros to do the right thing. Eventually. Right?

Cynthia Murrell, May 28, 2024

Bullying Google Is a Thing

May 24, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Imagine the smartest kid in the fifth grade. The classmates are not jealous, but they are keenly aware of the brightest star having an aloof, almost distracted attitude. Combine that with a credit in a TV commercial when the budding wizard was hired to promote an advanced mathematics course developed by the child’s mother and father. The blessed big brain finds itself the object of ridicule. The PhD parents, the proud teacher, and the child’s tutor who works at Lawrence Livermore National Laboratory cannot understand why the future Master of the Universe is being bullied. Remarkable, is it not?


Herewith is an illustration of a fearsome creature, generated in gloomy colors by the MidJourney bot, roaring its superiority. However, those observing the Big Boy are convulsed with laughter. Why laugh at an ageing money machine with big teeth?

I read “Google’s AI Search Feature Suggested Using Glue to Keep Cheese Sticking to a Pizza.” Yep, fifth-grade bullying may be part of the poking and prodding of a quite hapless but wealthy, successful Googzilla. Here’s an example of the situation in which the Google, which I affectionately call “Googzilla,” finds itself:

Google’s new search feature, AI Overviews, seems to be going awry. The tool, which gives AI-generated summaries of search results, appeared to instruct a user to put glue on pizza when they searched "cheese not sticking to pizza."

In another write up, Business Insider asserted:

But in searches shared on X, users have gotten contradictory instructions on boiling taro and even been encouraged to run with scissors after the AI appeared to take a joke search seriously. When we asked whether a dog had ever played in the NHL, Google answered that one had, apparently confused by a charity event for rescue pups.

My reaction to this digital bullying is mixed. On one hand, Google has demonstrated that its Code Red operating mode is cranking out half-cooked pizza. Sure, the pizza may have some non-poisonous glue, but Google is innovating. A big event provided a platform for the online advertising outfit to proclaim, “We are the leaders in smart software.” On the other hand, those observing Google’s outputs find the beastie a follower; for example, OpenAI announced ChatGPT4o the day before Google’s “reveal.” Then Microsoft presented slightly more coherent applications using AI, including the privacy special service which records everything a person does on a reinvented Windows on Arm device.

Several observations are warranted:

  1. Googzilla finds itself back in grade school with classmates of lesser ability, wealth, and heritage making fun of the entity. Wow, remember the shame? Remember the fun one had poking fun at an outsider? Humans are wonderful, are they not?
  2. “Users” or regular people who rely on Google seem to have pent-up anger with the direction in which Googzilla has been going. Since the company does not listen to its “users,” calling attention to Googzilla’s missteps is an easy way to say, “Hey, Big Fella, you are making us unhappy.” Will Google pay attention to these unexpected signals?
  3. Google, the corporate entity, seems to be struggling with Management 101 tasks; for example, staff or people resources. The CFO is heading to the exit. Competition, while flawed in some ways, continues to nibble at Google’s advertising perpetual motion machine. Google innovation focuses on gamesmanship and trying to buy digital marketing revenue.

Net net: I anticipate more coverage of Google’s strategy and tactical missteps. The bullying will continue and probably grow unless the company puts on its big boy pants and neutralizes the school yard behavior its critics and cynics deliver.

Stephen E Arnold, May 24, 2024

Silicon Valley and Its Bad Old Days? You Mean Today Days I Think

May 23, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Sam AI-Man knows how to make headlines. I wonder if he is aware of his PR prowess. The kids in Redmond tried their darnedest to make the “real” media melt down with an AI PC. And what happens? Sam AI-Man engages in the Scarlett Johansson voice play. Now whom does one believe? Did Sam AI-Man take umbrage at Ms. Johansson’s refusal to lend her voice to ChatGPTo? Did she recognize an opportunity to claim the digital voice available on ChatGPTo as “hers,” fully aware she could become a household name? Star power may relate to visibility in the “real” media, not the wonky technology blogs.


It seems to be a mess, doesn’t it? Thanks, MSFT Copilot. What happened to good old Bing, DuckDuckGo, and other services on the morning of May 23, 2024? Oh, well, the consequences of close-enough-for-horseshoes thinking perhaps?

And how do I know the dust-up is “real”? There’s the BBC’s story “Scarlett Johansson’s AI Row Has Echoes of Silicon Valley’s Bad Old Days.” I will return to this particularly odd write up in a moment. Also there is the black hole of money (the estimable Washington Post) and its paywalled story “Scarlett Johansson Says OpenAI Copied Her Voice after She Said No.” Her is the title of a Hollywood-type movie, not a TikTok confection.

Let’s look at the $77-million-in-losses outfit’s story first. The WaPo reports:

In May, two days before OpenAI planned to demonstrate the technology, Altman contacted her again, asking her to reconsider, she said. Before she could respond, OpenAI released a demo of its improved audio technology, featuring a voice called “Sky.” Many argued the coquettish voice — which flirted with OpenAI employees in the presentation — bore an uncanny resemblance to Johansson’s character in the 2013 movie “Her,” in which she performed the voice of a super-intelligent AI assistant. “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson wrote. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human,” she added.

I am not sure that an AI could improve this tight narrative. (We won’t have to wait long before AI writes WaPo stories, I have heard. Why? Maybe $77 million in losses?)

Now let’s look at the BBC’s article with the reference to “bad old days.” The write up reports:

“Move fast and break things” is a motto that continues to haunt the tech sector, some 20 years after it was coined by a young Mark Zuckerberg. Those five words came to symbolize Silicon Valley at its worst – a combination of ruthless ambition and a rather breathtaking arrogance – profit-driven innovation without fear of consequence. I was reminded of that phrase this week when the actor Scarlett Johansson clashed with OpenAI.

Sam AI-Man’s use of a digital voice which some assert “sounds” like Ms. Johansson’s voice is a reminder of the “bad old days.” One question: When did the Silicon Valley “bad old days” come to an end?

Opportunistic tactics require moving quickly. Whether something is broken or not is irrelevant. Look at Microsoft. Once again Sam AI-Man was able to attract attention. Google’s massive iteration of the technological equivalent of herring nine ways found itself “left of bang.” Sam AI-Man announced ChatGPTo the day before the Sundar & Prabakar In and Out Review.

Let’s summarize:

  1. Sam AI-Man got publicity by implementing an opportunistic tactic. Score one for the AI-Man.
  2. Ms. Johansson scored one because she was in the news, and she may have a legal play, but that will take months to wend its way through the US legal system.
  3. Google and Microsoft scored zero. Google played second fiddle to the ChatGPTo thing, and Microsoft was caught in the exhaust of the Sam AI-Man voice blast.

Now when did the “bad old days” of Silicon Valley end? Exactly never. It is so easy to say, “I’m sorry. So sorry.”

Stephen E Arnold, May 23, 2024

AI and Work: Just the Ticket for Monday Morning

May 20, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Well, here’s a cheerful essay for the average worker in a knowledge industry. “If Your Work’s Average, You’re Screwed It’s Over for You” is the ideal essay to kick off a new work week. The source of the publication is Digital Camera World. I thought traditional and digital cameras were yesterday’s news. Therefore, I surmise the author of the write up misses the good old days of Kodak film, chemicals, and really expensive retouching.


How many US government professionals will find themselves victims of good enough AI? Answer: More than the professional photographers. Thanks, MSFT Copilot. Good enough, a standard your security systems seem to struggle to achieve.

What does the camera-focused (yeah, lame pun) essay report? Consider this passage:

there’s one thing that only humans can do…

Okay, one thing. I give up. What’s that? Create other humans? Write poetry? Take fentanyl and lose the ability to stand up for hours? Captain a boat near orcas who will do what they can to sink the vessel? Oh, well. What’s that one thing?

"But I think the thing that AI is going to have an impossible job of achieving is that last 1% that stands between everything [else] and what’s great. I think that that last 1%, only a human can impart that.

AI does the mediocre. Humans, I think, do the exceptional. The logic seems to be that only someone in the top tier of humans will have a job. Everyone else will be standing on line to get basic income checks, pursuing crime, or reading books. Strike that. Scrolling social media. No doom required. Those not in the elite will know doom firsthand.

Here’s another passage to bring some zip to a Monday morning:

What it’s [smart software] going to do is, if your work’s average, you’re screwed. It’s [having a job] over for you. Be great, because AI is going to have a really hard time being great itself.

Observations? Just that cost cutting may be Job One.

Stephen E Arnold, May 20, 2024

Hoot Hoot Hoot: A Xoogler Pushes the Help Button

May 20, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

The Daily Express US (?) published a remarkable story: “Former Google VP Issues Horror AI Warning As Technology Set to Leave Millions Jobless.” That’s a catchy assertion. Who is the Xoogler (that’s a former Googler for those who don’t know) that is mashing the Redder Alert siren? It is Geoffrey Hinton, who is a Big Wheel in the Land of AI.


A teacher with an out-of-control class needs help. Unfortunately, pressing the big red button is performative. It is too late to get the class under control. Does AI behave like these kids? Thanks, MSFT Copilot. Good enough.

He believes that some entity has to provide a universal basic income to those people who are unable to find work because AI ate their jobs. The acronym UBI in the vernacular of a dinobaby means welfare. But those younger than I will interpret the UBI idea as something that “they” must provide.

The write up quotes the computer and AI wizard as opining:

"If you pay everybody a universal basic income, that solves the problem of them starving and not being able to pay the rent but that doesn’t solve the self-respect problem."

I like the reference to self-respect. I have not encountered too many examples in the last day or so. I have choked off the flood of “information” about the assorted trials of a former elected official, the hooligan trashing of Macy’s stores, and the arrest and un-arrest of a certain celebrity golfer. That’s enough of the self-respect thing for me.

The write up continues:

He added: "I am very worried about AI taking over lots of mundane jobs. That should be a good thing. It’s going to lead to a big increase in productivity, which leads to a big increase in wealth, and if that wealth was equally distributed that would be great, but it’s not going to be. "In the systems we live in, that wealth is going to go to the rich and not to the people whose jobs get lost, and that’s going to be very bad for society, I believe. It’s going to increase the gap between rich and poor, which increases the chances of right-wing populists getting elected."

Okay, that’s an interesting moment of insight from one of the people who assisted in the creation of this sprint to societal change.

I find it interesting that technology marches forward in a way that prevents smart people from peering down the road from a vantage point defined by their computer monitor and lab partners. The bird’s-eye view of a technology like AI is of interest only when the individual steps away from a Google-type outfit.

AI can hallucinate. I think it is clear that the wizards “inventing” smart software also hallucinate within their digital constructs.

What happens when the hallucinogen wears off? For Dr. Hinton it is time to call for help. I assume the UBI help will arrive from “the government.” Will “the government” listen, get organized, and take action? Dr. Hinton, like some smart software, might be experiencing what some of his AI colleagues call hallucinating. Am I surprised? Nope. Wizards are quirky.

Stephen E Arnold, May 20, 2024
