Angling to Land the Big Google Fish: A Humblebrag Quest to Be CEO?
April 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
My goodness, the staff and alums of DeepMind have been in the news. Wherever there are big bucks or big buzz opportunities, one will find the DeepMind marketing machinery. Consider “Can Demis Hassabis Save Google?” The headline has two messages for me. The first is that a “real” journalist thinks that Google is in big trouble. Big trouble translates to stakeholder discontent. That discontent means it is time to roll in a new Top Dog. I love poohbahing, but opining that the Google is in trouble takes nerve. Sure, it was aced by the Microsoft-OpenAI play not too long ago. But the Softies have moved forward with the Mistral deal and the mysterious Inflection deal. And the Google has money, market share, and might. Jake Paul can say he wants the Mike Tyson death stare. But that’s an opinion until Mr. Tyson hits Mr. Paul in the face.
The second message in the headline is that one of the DeepMind tribe can take over Google, defeat Microsoft, generate new revenues, avoid regulatory purgatory, and dodge the pain of its swinging-door approach to online advertising revenue generation; that is, people pay to get in, people pay to get out, and soon they will have to subscribe to watch those entering and exiting the company’s advertising machine.
Thanks, MSFT Copilot. Nice fish.
What are the points of the essay which caught my attention other than the headline for those clued in to the Silicon Valley approach to “real” news? Let me highlight a few points.
First, here’s a quote from the write up:
Late on chatbots, rife with naming confusion, and with an embarrassing image generation fiasco just in the rearview mirror, the path forward won’t be simple. But Hassabis has a chance to fix it. To those who know him, have worked alongside him, and still do — all of whom I’ve spoken with for this story — Hassabis just might be the perfect person for the job. “We’re very good at inventing new breakthroughs,” Hassabis tells me. “I think we’ll be the ones at the forefront of doing that again in the future.”
Is the past a predictor of future success? More than lab-to-Android transfer is going to be required. And “good at inventing new breakthroughs” is an assertion, not an evaluation. Google has been in the me-too business for a long time. The company sees itself as a modern Bell Labs and PARC. I think that the company’s perception of itself, its culture, and the comments of its senior executives suggest that the derivative nature of Google is neither remembered nor considered. It’s just “we’re very good.” Sure “we” are.
Second, I noted this statement:
Ironically, a breakthrough within Google — called the transformer model — led to the real leap. OpenAI used transformers to build its GPT models, which eventually powered ChatGPT. Its generative ‘large language’ models employed a form of training called “self-supervised learning,” focused on predicting patterns, and not understanding their environments, as AlphaGo did. OpenAI’s generative models were clueless about the physical world they inhabited, making them a dubious path toward human level intelligence, but would still become extremely powerful. Within DeepMind, generative models weren’t taken seriously enough, according to those inside, perhaps because they didn’t align with Hassabis’s AGI priority, and weren’t close to reinforcement learning. Whatever the rationale, DeepMind fell behind in a key area.
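The “self-supervised learning” the quote describes — predicting patterns from the data itself, with no human-written labels — can be sketched in miniature. The bigram model below is a hypothetical toy, orders of magnitude simpler than a transformer, but the training signal is the same idea: the text supplies its own next-token targets.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which character follows it.
    Self-supervision in miniature: the label for each position
    is simply the next character in the text itself."""
    following = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, char):
    """Return the most frequently observed successor of `char`."""
    if char not in model:
        return None
    return model[char].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat")
print(predict_next(model, "a"))  # 't' — the most common successor of 'a'
```

A real large language model replaces the frequency table with billions of learned parameters, but no hand-labeled dataset is required in either case, which is why the approach scales to raw web text.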
Google figured something out and then did nothing with the “insight.” There were research papers and chatter. But OpenAI (powered in part by Sam AI-Man) took the Google invention and used it to carpet bomb, mine, and set fire to Google’s presumed lead in anything related to search, retrieval, and smart software. The aftermath of the Microsoft OpenAI PR coup is a continuing story of rehabilitation. From what I have seen, Google needs more time getting its ageing body parts working again. The ad machine produces money, but the company reels from management issue to management issue with alarming frequency. Biased models complement spats with employees. Silicon Valley chutzpah causes neurological spasms among US and EU regulators. Something is broken, and I am not sure a person from inside the company has the perspective, knowledge, and management skills to fix an increasingly peculiar outfit. (Yes, I am thinking of ethnically-incorrect German soldiers loyal to a certain entity on Google’s list of questionable words and phrases.)
And, lastly, let’s look at this statement in the essay:
Many of those who know Hassabis pine for him to become the next CEO, saying so in their conversations with me. But they may have to hold their breath. “I haven’t heard that myself,” Hassabis says after I bring up the CEO talk. He instantly points to how busy he is with research, how much invention is just ahead, and how much he wants to be part of it. Perhaps, given the stakes, that’s right where Google needs him. “I can do management,” he says, ”but it’s not my passion. Put it that way. I always try to optimize for the research and the science.”
I wonder why the author of the essay does not query Jeff Dean, the former head of a big AI unit in Mother Google’s inner sanctum, about Mr. Hassabis. How about querying Mr. Hassabis’ co-founder of DeepMind about Mr. Hassabis’ temperament and decision-making method? What about chasing down former employees of DeepMind and getting those wizards’ perspective on what DeepMind can and cannot accomplish?
Net net: Somewhere in the little-understood universe of big technology, there is an invisible hand pointing at DeepMind and making sure the company appears in scientific publications, the trade press, peer reviewed journals, and LinkedIn funded content. Determining what’s self-delusion, fact, and PR wordsmithing is quite difficult.
Google may need some help. To be frank, I am not sure anyone in the Google starting line up can do the job. I am also not certain that a blue chip consulting firm can do much either. Google, after a quarter century of zero effective regulation, has become larger than most government agencies. Its institutional mythos creates dozens of delusional Ulysses who cannot separate fantasies of the lotus eaters from the gritty reality of the company as one of the contributors to the problems facing youth, smaller businesses, governments, and cultural norms.
Google is Googley. It will resist change.
Stephen E Arnold, April 3, 2024
Open Source Software: Fool Me Once, Fool Me Twice, Fool Me Once Again
April 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Open source is shoved in my face each and every day. I nod and say, “Sure” or “Sounds on point.” But in the back of my mind, I ask myself, “Am I the only one who sees open source as a way to demonstrate certain skills, a Hail Mary in a dicey job market, or a bit of MBA fancy dancing?” I am not alone. Navigate to “Software Vendors Dump Open Source, Go for Cash Grab.” The write up does a reasonable job of explaining the open source “playbook.”
The write up asserts:
A company will make its program using open source, make millions from it, and then — and only then — switch licenses, leaving their contributors, customers, and partners in the lurch as they try to grab billions.
Yep, billions with a “B”. I think that the goal may be big numbers, but some open source outfits chug along ingesting venture funding and surfing on assorted methods of raising cash and never really get into “B” territory. I don’t want to name names because as a dinobaby, the only thing I dislike more than doctors is a legal eagle. Want proper nouns? Sorry, not in this blog post.
Thanks, MSFT Copilot. Where are you in the open source game?
The write up focuses on Redis, which is a database that strikes me as quite similar to the now-forgotten Pinpoint approach or the clever Inktomi method to speed up certain retrieval functions. Well, Redis, unlike Pinpoint or Inktomi is into the “B” numbers. Two billion to be semi-exact in this era of specious valuations.
The write up says that Redis changed its license terms. This is nothing new. 23andMe made headlines with some term modifications as the company slowly settled to earth and landed in a genetically rich river bank in Silicon Valley.
The article quotes Redis Big Dogs as saying:
“Beginning today, all future versions of Redis will be released with source-available licenses. Starting with Redis 7.4, Redis will be dual-licensed under the Redis Source Available License (RSALv2) and Server Side Public License (SSPLv1). Consequently, Redis will no longer be distributed under the three-clause Berkeley Software Distribution (BSD).”
I think this means, “Pay up.”
The author of the essay (Steven J. Vaughan-Nichols) identifies three reasons for the bait-and-switch play. I think there is just one — money.
The big question is, “What’s going to happen now?”
The essay does not provide an answer. Let me fill the void:
- Open source will chug along until there is a break out program. Then whoever has the rights to the open source (that is, the one or handful of people who created it) will look for ways to make money. The software is free, but modules to make it useful cost money.
- Open source will rot from within because “open” makes it possible for bad actors to poison widely used libraries. Once a big outfit suffers big losses, it will be hasta la vista open source and “Hello, Microsoft” or whoever the accountants and lawyers running the company believe care about their software.
- Open source becomes quasi-commercial. Options range from Microsoft charging for GitHub access to an open source repository becoming a membership operation like a digital Mar-A-Lago. The “hosting” service becomes the equivalent of a golf course, and the people who use the facilities pay fees which can vary widely and without any logic whatsoever.
Which of these three predictions will come true? Answer: The one that allows the breakout open source stakeholders to generate the maximum amount of money.
Stephen E Arnold, April 1, 2024
Commercial Open Source: Fantastic Pipe Dream or Revenue Pipe Line?
March 26, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Open source is a term which strikes me as au courant. Artificial intelligence software is often described as “open source.” The idea has a bit of “do good” mixed with the idea that commercial software puts customers in handcuffs. (I think I hear Kumbaya playing faintly in the background.) Is it possible to blend the idea of free and open software with the principles of commercial software lock-in? Notable open source entrepreneurs have become difficult to differentiate from a run-of-the-mill technology company. Examples include RedHat, Elastic, and OpenAI. Oops. Sorry. OpenAI is a different type of company. I think.
Will open source software, particularly open source AI components, end up like this private playground? Thanks, MSFT Copilot. You are into open source, aren’t you? I hope your commitment is stronger than for server and cloud security.
I had these open source thoughts when I read “AI and Data Infrastructure Drives Demand for Open Source Startups.” The source of the information is Runa Capital, now located in Luxembourg. The firm publishes a report called the Runa Open Source Start Up Index, and it is a “rosy” document. The point of the article is that Runa sees open source as a financial opportunity. You can start your exploration of the tables and charts at this link on the Runa Capital Web site.
I want to focus on some information tucked into the article, just not presented in bold face or with a snappy chart. Here’s the passage I noted:
Defining what constitutes “open source” has its own inherent challenges too, as there is a spectrum of how “open source” a startup is — some are more akin to “open core,” where most of their major features are locked behind a premium paywall, and some have licenses which are more restrictive than others. So for this, the curators at Runa decided that the startup must simply have a product that is “reasonably connected to its open-source repositories,” which obviously involves a degree of subjectivity when deciding which ones make the cut.
The word “reasonably” invokes an image of lawyers negotiating on behalf of their clients. Nothing is quite so far from the kumbaya of the “real” open source software initiative as lawyers. Just look at the licenses for open source software.
I also noted this statement:
Thus, according to Runa’s methodology, it uses what it calls the “commercial perception of open-source” for its report, rather than the actual license the company attaches to its project.
What is “open source”? My hunch it is whatever the lawyers and courts conclude.
Why is this important?
The talk about “open source” is relevant to the “next big thing” in technology. And what is that? ANSWER: A fresh set of money making plays.
I know that there are true believers in open source. I wish them financial and kumbaya-type success.
My take is different: Open source, as the term is used today, is one of the phrases repurposed to breathe life into what some critics call a techno-feudal world. I don’t have a dog in the race. I don’t want a dog in any race. I am a dinobaby. I find amusement in how language becomes the Teflon on which money (one hopes) glides effortlessly.
And the kumbaya? Hmm.
Stephen E Arnold, March 26, 2024
AI Innovation: Do Just Big Dogs Get the Fat, Farmed Salmon?
March 20, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Let’s talk about statements like “AI will be open source” and “AI has spawned hundreds, if not thousands, of companies.” Those are assertions which seem to be slightly different from what’s unfolding at some of the largest technology outfits in the world. The circling and sniffing allegedly underway between the Apple and the Google packs is interesting. Apple and Google have a relationship, probably one that will need a marriage counselor, but it is a relationship.
The wizard scientists have created an interesting digital construct. Thanks, MSFT Copilot. How are you coming along with your Windows 11 updates and Azure security today? Oh, that’s too bad.
The news, however, is that Microsoft is demonstrating that it wants to eat the fattest salmon in the AI stream. Microsoft has a deal of some type with OpenAI, operating under the steady hand of Sam AI-Man. Plus the Softies have cozied up to the French outfit Mistral. Today at 5:30 am US Eastern I learned that Microsoft has embraced an outstanding thinker, sensitive manager, and pretty much the entire Inflection AI outfit.
The number of stories about this move reflects the interest in smart software and in what may be one of the world’s top purveyors of software which attracts bad actors from around the world. Thinking about breaches in the new Microsoft world is not a topic in the write ups about this deal. Why? I think the management move has captured attention because it is surprising, disruptive, and big in terms of money and implications.
“Microsoft Hires DeepMind Co-Founder Suleyman to Run Consumer AI” states:
DeepMind workers complained about his [former Googler Mustafa Suleyman and subsequent Inflection.ai senior manager] management style, the Financial Times reported. Addressing the complaints at the time, Suleyman said: “I really screwed up. I was very demanding and pretty relentless.” He added that he set “pretty unreasonable expectations” that led to “a very rough environment for some people. I remain very sorry about the impact that caused people and the hurt that people felt there.” Suleyman was placed on leave in 2019 and months later moved to Google, where he led AI product management until exiting in 2022.
Okay, a sensitive manager learns from his mistakes joins Microsoft.
And Microsoft demonstrates that the AI opportunity is wide open. “Why Microsoft’s Surprise Deal with $4 Billion Startup Inflection Is the Most Important Non-Acquisition in AI” states:
Ever since OpenAI launched ChatGPT in November 2022, the tech world has been experiencing a collective mania for AI chatbots, pouring billions of dollars into all manner of bots with friendly names (there’s Claude, Rufus, Poe, and Grok — there’s even a chatbot name generator). In January, OpenAI launched a GPT store that’s chock full of bots. But how much differentiation and value can these bots really provide? The general concept of chatbots and copilots is probably not going away, but the demise of Pi may signal that reality is crashing into the exuberant enthusiasm that gave birth to countless chatbots.
Several questions will be answered in the weeks ahead:
- What will regulators in the EU and US do about the deal when its moving parts become known?
- How will the kumbaya evolve when Microsoft senior managers, its AI partners, and reassigned Microsoft employees have their first all-hands Teams or off-site meeting?
- Does Microsoft senior management have the capability of addressing the attack surface of the new technologies and the existing Microsoft software?
- What happens to the AI ecosystem which depends on open source software related to AI if Microsoft shifts into “commercial proprietary” to hit revenue targets?
- With multiple AI systems, how are Microsoft Certified Professional agents going to [a] figure out what broke and [b] how to fix it?
- With AI the apparent “next big thing,” how will adversaries like nations not pals with the US respond?
Net net: How unstable is the AI ecosystem? Let’s ask IBM Watson because its output is going to be as useful as any other in my opinion. My hunch is that the big dogs will eat the fat, farmed salmon. Who will pull that luscious fish from the big dog’s maw? Not me.
Stephen E Arnold, March 20, 2024
A Single Google Gem for March 19, 2024
March 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I want to focus on what could be the star sapphire of Googledom. The story appeared on the estimable Murdoch confection Fox News. Its title? “Is Google Too Broken to Be Fixed? Investors Deeply Frustrated and Angry, Former Insider Warns.” The word choice in this Googley headline signals the alert reader that the Foxy folks have a juicy story to share. “Broken,” “Frustrated,” “Angry,” and “Warns” suggest that someone has identified some issues at the beloved Google.
A Google gem. Thanks, MSFT Copilot Bing thing. How’s the staff’s security today?
The write up states:
A former Google executive [David Friedberg] revealed that investors are “deeply frustrated” that the scandal surrounding their Gemini artificial intelligence (AI) model is becoming a “real threat” to the tech company. Google has issued several apologies for Gemini after critics slammed the AI for creating “woke” content.
The Xoogler, in what seems to be tortured prose, allegedly said:
“The real threat to Google is more so, are they in a position to maintain their search monopoly or maintain the chunk of profits that drive the business under the threat of AI? Are they adapting? And less so about the anger around woke and DEI,” Friedberg explained. “Because most of the investors I spoke with aren’t angry about the woke, DEI search engine, they’re angry about the fact that such a blunder happened and that it indicates that Google may not be able to compete effectively and isn’t organized to compete effectively just from a consumer competitiveness perspective,” he continued.
The interesting comment in the write up (which is recycled podcast chatter) seems to be:
Google CEO Sundar Pichai promised the company was working “around the clock” to fix the AI model, calling the images generated “biased” and “completely unacceptable.”
Does the comment attributed to a Big Dog Microsoftie reflect the new perception of the Google? The Hindustan Times, which should have radar tuned to the actions of certain executives with roots entwined in India, reported:
Satya Nadella said that Google “should have been the default winner” of Big Tech’s AI race as the resources available to it are the maximum which would easily make it a frontrunner.
My interpretation of this statement is that Google had a chance to own the AI casino, roulette wheel, and the croupiers. Instead, Google’s senior management ran over the smart squirrel with the Paris demonstration of the fantastic Bard AI system, a series of me-too announcements, and the outputting of US historical scenes with people of color turning up in what I would call surprising places.
Then the PR parade of Google wizards explains the online advertising firm’s innovations in playing games, figuring out health stuff (shades of IBM Watson), and achieving quantum supremacy in everything. Well, everything except smart software. The predicament of the ad giant is illuminated with the burning of billions in market cap coincident with the wizards’ flubs.
Net net: That’s a gem. Google losing a game it allegedly owned. I am waiting for the next podcast about the Sundar & Prabhakar Comedy Tour.
Stephen E Arnold, March 19, 2024
Microsoft Decides to Work with CISPE on Cloudy Concerns
March 19, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Perhaps a billion and a half dollars in fines can make a difference to a big tech company after all. In what looks like a move to avoid more regulatory scrutiny, Yahoo Finance reports, “Microsoft in Talks to End Trade Body’s Cloud Computing Complaint.” The trade body here is CISPE, a group of firms that provide cloud services in Europe. Amazon is one of those, but 26 smaller companies are also members. The group asserts certain changes Microsoft made to its terms of service in October of 2022 have harmed Europe’s cloud computing ecosystem. How, exactly, is unclear. Writer Foo Yun Chee tells us:
“[CISPE] said it had received several complaints about Microsoft, including in relation to its product Azure, which it was assessing based on its standard procedures, but declined to comment further. Azure is Microsoft’s cloud computing platform. CISPE said the discussions were at an early stage and it was uncertain whether these would result in effective remedies but said ‘substantive progress must be achieved in the first quarter of 2024’. ‘We are supportive of a fast and effective resolution to these harms but reiterate that it is Microsoft which must end its unfair software licensing practices to deliver this outcome,’ said CISPE secretary general Francisco Mingorance. Microsoft, which notched up 1.6 billion euros ($1.7 billion) in EU antitrust fines in the previous decade, has in recent years changed its approach towards regulators to a more accommodative one.”
Just how accommodating Microsoft will be remains to be seen.
Cynthia Murrell, March 19, 2024
Harvard University: William James Continues Spinning in His Grave
March 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
William James, the brother of a novelist whose 20 novels cause my mind to wander at the thought of reading any one of them, loved Harvard University. In a speech at Stanford University, he admitted his untoward affection. If one wanders by William’s grave in Cambridge Cemetery (daylight only, please), one can hear a sound similar to a giant sawmill blade emanating from a modest tombstone. “What’s that horrific sound?” a passerby might ask. The answer: “William is spinning in his grave. It’s a bit like a perpetual motion machine now,” one elderly person says. “And it is getting louder.”
William is spinning in his grave because his beloved Harvard appears to foster making stuff up. Thanks, MSFT Copilot. Working on security today or just getting printers to work?
William is amping up his RPMs. Another distinguished Harvard expert, professor, and shaper of the minds of young men and women and thems has been caught fabricating data. This is not the overt synthetic data shop at Stanford University’s Artificial Intelligence Lab and the commercial outfit Snorkel. Nope. This is just a faculty member who, by golly, wanted to be respected, it seems.
The Chronicle of Higher Education (the immensely popular online information service consumed by thumb typers and swipers) published “Here’s the Unsealed Report Showing How Harvard Concluded That a Dishonesty Expert Committed Misconduct.” (Registration required because, you know, information about education is sensitive and users must be monitored.) The report allegedly required 1,300 pages. I did not read it. I get the drift: Another esteemed scholar just made stuff up. In my lingo, the individual shaped reality to support her / its vision of self. Reality was not delivering honor, praise, rewards, money, and freedom from teaching horrific undergraduate classes. Why not take the Excel macro to achievement: Invent and massage information. Who is going to know?
The write up says:
the committee wrote that “she does not provide any evidence of [research assistant] error that we find persuasive in explaining the major anomalies and discrepancies.” Over all, the committee determined “by a preponderance of the evidence” that Gino “significantly departed from accepted practices of the relevant research community and committed research misconduct intentionally, knowingly, or recklessly” for five alleged instances of misconduct across the four papers. The committee’s findings were unanimous, except for in one instance. For the 2012 paper about signing a form at the top, Gino was alleged to have falsified or fabricated the results for one study by removing or altering descriptions of the study procedures from drafts of the manuscript submitted for publication, thus misrepresenting the procedures in the final version. Gino acknowledged that there could have been an honest error on her part. One committee member felt that the “burden of proof” was not met while the two other members believed that research misconduct had, in fact, been committed.
Hey, William, let’s hook you up to a power test dynamometer so we can determine exactly how fast you are spinning in your chill, dank abode. Of course, if the data don’t reveal high-RPM spinning, someone at Harvard can be enlisted to touch up the data. Everyone seems to be doing it, from my vantage point in rural Kentucky.
Is there a way to harness the energy of professors who may cut corners and respected but deceased scholars to do something constructive? Oh, look. There’s a protest group. Let’s go ask them for some ideas. On second thought… let’s not.
Stephen E Arnold, March 15, 2024
AI Limits: The Wind Cannot Hear the Shouting. Sorry.
March 14, 2024
This essay is the work of a dumb dinobaby. No smart software required.
One of my teachers had a quote on the classroom wall. It was, I think, from a British novelist. Here’s what I recall:
Decide on what you think is right and stick to it.
I never understood the statement. In school, I was there to learn. How could I decide whether what I was reading was correct? Making a decision based on what I thought seemed stupid because I was uninformed. The notion of “stick” is interesting and also a little crazy. My family was going to move to Brazil, and I knew that sticking to what I did in the Midwest in the 1950s would have to change. For one thing, we had electricity. The town to which we were relocating had electricity a few hours each day. Change was necessary. Even as a young sprout, I knew trying to prevent something required more than talk, writing a Letter to the Editor, or getting a petition signed.
I thought about this crazy quote as soon as I read “AI Bioweapons? Scientists Agree to Policies to Reduce Risk of Human Disaster.” The fear mongering note of the write up’s title intrigued me. Artificial intelligence is in what I would call morph mode. What this means is that getting a fix on what is new and impactful in the field of artificial intelligence is difficult. An electrical engineering publication reported that experts are not sure if what is going on is good or bad.
Shouting into the wind does not work for farmers or AI scientists. Thanks, MSFT Copilot. Busy with security again?
The “AI Bioweapons” essay is leaning into the bad side of the AI parade. The point of the write up is that “over 100 scientists” want to “prevent the creation of AI bioweapons.” The article states:
The agreement, crafted following a 2023 University of Washington summit and published on Friday, doesn’t ban or condemn AI use. Rather, it argues that researchers shouldn’t develop dangerous bioweapons using AI. Such an ask might seem like common sense, but the agreement details guiding principles that could help prevent an accidental DNA disaster.
That sounds good, but is it like the quote about “decide on what you think is right and stick to it”? In a dynamic environment, change appears to accelerate. Toss in technology and the potential for big wins (financial, professional, or political), and the likelihood of slowing down the rate of change is reduced.
To add some zip to the AI stew, much of the technology required to do some AI fiddling around is available as open source software or low-cost applications and APIs.
I think it is interesting that 100 scientists want to prevent something. The hitch in the git-along is that other countries have scientists who have access to AI research, tools, software, and systems. These scientists may feel as though their reminding people that doom is (maybe?) just around the corner will help. It may matter about as much as a ruined building in an abandoned town on Route 66.
Here are a few observations about why individuals rally around a cause, which is widely perceived by some of those in the money game as the next big thing:
- The shouters’ perception of their importance makes it an imperative to speak out about danger
- Getting a group of important, smart people to climb on a bandwagon makes the organizers perceive themselves as doing something important and demonstrating their “get it done” mindset
- Publicity is good. It is very good when a speaking engagement, a grant, or a consulting gig produces a little extra fame and money, preferably in combo.
Net net: The wind does not listen to those shouting into it.
Stephen E Arnold, March 14, 2024
AI Deepfakes: Buckle Up. We Are in for a Wild Drifting Event
March 14, 2024
This essay is the work of a dumb dinobaby. No smart software required.
AI deepfakes are testing the uncanny valley but technology is catching up to make them as good as the real thing. In case you’ve been living under a rock, deepfakes are images, video, and sound clips generated by AI algorithms to mimic real people and places. For example, someone could create a deepfake video of Joe Biden and Donald Trump in a sumo wrestling match. While the idea of the two presidential candidates duking it out on a sumo mat is absurd, technology is that advanced.
Gizmodo reports the frustrating news that “The AI Deepfakes Problem Is Going To Get Unstoppably Worse.” Bad actors are already using deepfakes to wreak havoc on the world. Federal regulators outlawed robocalls, and OpenAI and Google released watermarks on AI-generated images. These measures aren’t doing anything to curb bad actors.
Which is real? Which is fake? Thanks, MSFT Copilot, the objects almost appear identical. Close enough like some security features. Close enough means good enough, right?
New laws and technology need to be adopted and developed to prevent this new age of misinformation. There should be an endless amount of warnings on deepfake videos and soundbites, and service providers should employ them too. It may take a horrifying event to make deepfake detection more prevalent:
"Deepfake detection technology also needs to get a lot better and become much more widespread. Currently, deepfake detection is not 100% accurate for anything, according to Copyleaks CEO Alon Yamin. His company has one of the better tools for detecting AI-generated text, but detecting AI speech and video is another challenge altogether. Deepfake detection is lagging generative AI, and it needs to ramp up, fast.”
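The point that detection is “not 100% accurate for anything” boils down to a threshold problem: a detector outputs a confidence score, and wherever the cutoff is set, some fakes slip through or some genuine clips get flagged. The scores, threshold values, and clip counts below are invented purely for illustration; no real detector is involved.

```python
# Hypothetical detector scores in [0, 1]; higher means "more likely synthetic".
# When the score distributions of real and fake clips overlap, no threshold
# can achieve zero errors of both kinds at once.
real_scores = [0.05, 0.20, 0.35, 0.55]   # genuine clips (made-up numbers)
fake_scores = [0.45, 0.60, 0.80, 0.95]   # synthetic clips (made-up numbers)

def error_rates(threshold):
    """Fraction of fakes missed and fraction of genuine clips wrongly flagged."""
    missed_fakes = sum(s < threshold for s in fake_scores) / len(fake_scores)
    false_alarms = sum(s >= threshold for s in real_scores) / len(real_scores)
    return missed_fakes, false_alarms

for t in (0.3, 0.5, 0.7):
    miss, alarm = error_rates(t)
    print(f"threshold {t}: missed fakes {miss:.0%}, false alarms {alarm:.0%}")
```

Lowering the threshold catches more fakes but flags more real footage; raising it does the reverse. That tradeoff, not any single product’s shortcomings, is why “100% accurate” detection is not on offer.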
Wired Magazine missed an opportunity to make clear that the wizards at Google can sell data and advertising, but the sneaker-wearing marvels cannot manage deepfake adult pictures. Heck, Google cannot manage YouTube videos teaching people how to create deepfakes. My goodness, what happens if one uploads ASCII art of a problematic item to Gemini? One of my team tells me that the Sundar & Prabhakar guardrails don’t work too well in some situations.
Not every deepfake will be as clumsy as the one in which the “to be maybe” future queen of England finds herself ensnared. One can ask Taylor Swift, I assume.
Whitney Grace, March 14, 2024
Can Your Job Be Orchestrated? Yes? Okay, It Will Be Smartified
March 13, 2024
This essay is the work of a dumb dinobaby. No smart software required.
My work career over the last 60 years has been filled with luck. I have been in the right place at the right time. I have been in companies which have been acquired, reassigned, and exposed to opportunities which just seemed to appear. Unlike today’s young college graduate, I never thought once about being able to get a “job.” I just bumbled along. In an interview for something called Singularity, the interviewer asked me, “What’s been the key to your success?” I answered, “Luck.” (Please, keep in mind that the interviewer assumed I was a success, but he had no idea that I did not want to be a success. I just wanted to do interesting work.)
Thanks, MSFT Copilot. Will smart software do your server security? Ho ho ho.
Would I be able to get a job today if I were 20 years old? Believe it or not, I told my son in one of our conversations about smart software: “Probably not.” I thought about this comment when I read today (March 13, 2024) the essay “Devin AI Can Write Complete Source Code.” The main idea of the article is that artificial intelligence, properly trained and appropriately resourced, can do what only humans could do in 1966 (when I graduated with a BA degree from a so-so university in flyover country). The write up states:
Devin is a Generative AI Coding Assistant developed by Cognition that can write and deploy codes of up to hundreds of lines with just a single prompt. Although there are some similar tools for the same purpose such as Microsoft’s Copilot, Devin is quite the advancement as it not only generates the source code for software or website but it debugs the end-to-end before the final execution.
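The generate-and-debug loop the write up describes can be sketched in a few lines. This is my own illustrative stand-in, not Devin’s or any vendor’s actual API: the “model” is a stub that produces a buggy first draft, and the loop runs the code, feeds the failure back, and retries.

```python
# A minimal sketch of a generate-test-repair coding agent. The
# fake_model function is a hypothetical stand-in for an LLM call;
# real systems like Devin or Copilot work very differently in detail.
import os
import subprocess
import sys
import tempfile

def fake_model(prompt, feedback=None):
    """Stub LLM: first draft has an off-by-one bug; the 'debugging'
    pass (when feedback is supplied) returns a corrected version."""
    if feedback:
        return "print(sum(range(1, 11)))"
    return "print(sum(range(10)))"

def run(code):
    """Execute generated code in a subprocess; return (stdout, rc)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        out = subprocess.run([sys.executable, path],
                             capture_output=True, text=True, timeout=10)
        return out.stdout.strip(), out.returncode
    finally:
        os.unlink(path)

def code_agent(prompt, expected, max_rounds=3):
    """Generate, execute, and repair until the output matches."""
    feedback = None
    for _ in range(max_rounds):
        code = fake_model(prompt, feedback)
        output, rc = run(code)
        if rc == 0 and output == expected:
            return code
        feedback = f"got {output!r}, expected {expected!r}"
    raise RuntimeError("agent failed to converge")

final_code = code_agent("print the sum of 1 through 10", expected="55")
```

The point of the sketch is the loop, not the stub: once code generation, execution, and error feedback are wired together, the “debugs end-to-end before the final execution” behavior falls out of iteration.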
Let’s assume the write up is mostly accurate. It does not matter. Smart software will be shaped to deliver what I call orchestrated solutions either today, tomorrow or next month. Jobs already nuked by smartification are customer service reps, boilerplate writing jobs (hello, McKinsey), and translation. Some footloose and fancy free gig workers without AI skills may face dilemmas about whether to pursue begging, YouTubing the van life, or doing some spelunking in the Chemical Abstracts database for molecular recipes in a Walmart restroom.
The trajectory of applied AI is reasonably clear to me. Once “programming” gets swept into the Prada bag of AI, what other professions will be smartified? Once again, the likely path is lit by dim but visible Alibaba solar lights for the garden:
- Legal tasks which are repetitive. Even though the cases differ, the workflow is something an average law school graduate can master and learn to loathe
- Forensic accounting. Accountants are essentially Groundhog Day people, because every tax cycle is the same old same old
- Routine one-day surgeries. Sorry, dermatologists, cataract shops, and kidney stone crunchers. Robots will do the job and not screw up the DRG codes too much.
- Marketers. I know marketing requires creative thinking. Okay, but based on the Super Bowl ads this year, I think some clients will be willing to give smart software a whirl. Too bad about filming a horse galloping along the beach in Half Moon Bay though. Oh, well.
That’s enough of the professionals who will be affected by orchestrated work flows surfing on smartified software.
Why am I bothering to write down what seems painfully obvious to my research team?
I just wanted another reason to say, “I am glad I am old.” What many young college graduates will discover is that, despite my “luck” over the course of my work career, smartified software will not only kill some types of work; it will also remove the surprise from a serendipitous life journey.
To reiterate my point: I am glad I am old and understand efficiency, smartification, and the value of having been lucky.
Stephen E Arnold, March 13, 2024