What a Great Testament to Peer Review!

February 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have been concerned about academic research journals for decades. These publishers learn that a paper is wonky and, guess what? Most of the bogus write-ups remain online. Now that big academic wheels have been resigning over mental lapses, outright plagiarism, or made-up data, we have this wonderful illustration:

image

The diagram looks like an up-market medical illustration, but I think it is a confection pumped out by a helpful smart software image outputter. “FrontiersIn Publishes Peer Reviewed Paper with AI Generated Rat Image, Sparking Reliability Concerns” reports:

A peer-reviewed scientific paper with nonsensical AI-generated images, including a rat with exaggerated features like a gigantic penis, has been published by FrontiersIn, a major research publisher. The images have sparked concerns about the reliability of AI-generated content in academia.

I like the “gigantic penis” trope. Are the authors delivering a tongue-in-cheek comment to the publishers of peer-reviewed papers? Are the authors chugging along blissfully unaware of the reputational damage data flexing has caused the former president of Stanford University and the big dog of ethics at Harvard University? Is the write up a slightly more sophisticated Onion article?

Interesting hallucination on the part of the alleged authors and the smart software. Most tech bros are happy with an exotic car. Who knew what appealed to a smart software system’s notion of a male rat organ?

Stephen E Arnold, February 23, 2024

What Techno-Optimism Seems to Suggest (Oligopolies, a Plutocracy, or Utopia)

February 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Science and mathematics are comparable to religion. These fields of study attract acolytes who study and revere associated knowledge and shun nonbelievers. The advancement of modern technology is its own subset of religious science and mathematics combined with philosophical doctrine. Tech Policy Press discusses the changing views on technology-based philosophy in: “Parsing The Political Project Of Techno-Optimism.”

Rich venture capitalists Marc Andreessen and Ben Horowitz are influential in Silicon Valley. While they’ve shaped modern technology with their investments, they also tried drafting a manifesto about how technology should be handled in the future. They “creatively” labeled it the “techno-optimist manifesto.” It promotes an ideology that favors rich people increasing their wealth by investing in politicians who will help them achieve this.

Techno-optimism is not the new mantra of Silicon Valley. The manifesto’s reception did not go well. Andreessen wrote:

“Techno-Optimism is a material philosophy, not a political philosophy…We are materially focused, for a reason – to open the aperture on how we may choose to live amid material abundance.”

He also labeled this section, “the meaning of life.”

Techno-optimism is a revamped version of the Californian ideology that reigned in the 1990s. It preached that the future should be shaped by engineers, investors, and entrepreneurs without governmental influence. Techno-optimism wants venture capitalists to be untaxed with unregulated portfolios.

Horowitz added his own Silicon Valley-type tidbit:

“‘…will, for the first time, get involved with politics by supporting candidates who align with our vision and values specifically for technology. (…) [W]e are non-partisan, one issue voters: if a candidate supports an optimistic technology-enabled future, we are for them. If they want to choke off important technologies, we are against them.’”

Horowitz and Andreessen are giving the world what some might describe as “a one-finger salute.” These venture capitalists want to do whatever they want wherever they want with governments in their pockets.

This isn’t a new ideology or a philosophy. It’s a rebranding of socialism and fascism and communism. There’s an even better word that describes techno-optimism: Plutocracy. I am not sure the approach will produce a Utopia. But there is a good chance that some giant techno feudal outfits will reap big rewards. But another approach might be to call techno optimism a religion and grab the benefits of a tax exemption. I wonder if someone will create a deep fake of Jim and Tammy Faye? Interesting.

Whitney Grace, February 23, 2024

OpenAI Embarks on Taking Down the Big Guy in Web Search

February 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Google may be getting up there in Internet years; however, due to its size and dark shadow, taking the big fellow down and putting it out of the game may be difficult. Users are accustomed to the Google. Habits, particularly those which become semi-automatic like a heroin addict’s fiddling with a spoon, are tough to break. After 25 years, hoping a user will grow out of the habit may reassure worried onlookers, but the efficacy of wait-and-see is not going to get a bent person straight.

image

Taking down Googzilla may be a job for lots of little people. Thanks, Google ImageFX. Know thyself, right?

I read “OpenAI Is Going to Face an Uphill Battle If It Takes on Google Search.” The write up describes an aspirational goal of Sam AI-Man’s OpenAI system. The write up says:

OpenAI is reportedly building its own search product to take on Google.

OpenAI is jumping into a CRRC (combat rubber raiding craft) already crowded with special ops people. There is the Kagi subscription search. There are Phind.com and You.com. There is a one-man band called Stract and more. A new and improved Yandex is coming. The reliable Swisscows.com is ruminating in the mountains. The ever-watchful OSINT professionals gather search engines like a mother goose. And what do we get? Bing is going nowhere even with Copilot, except in the enterprise market where Excel users are asking, “What the H*ll?” Meanwhile the litigating beast continues to capture 90 percent or more of search traffic and oodles of data. Okay, team, who is going to chop block the Google, a fat and slow player at that?

The write up opines:

But on the search front, it’s still all Google all the way. And even if OpenAI popularized the generative AI craze, the company has a long way to go if it hopes to take down the search giant.

Competitors can dream, plot, innovate, and issue press releases. But for the foreseeable future, the big guy is going to push others out of the way.

Stephen E Arnold, February 22, 2024

AI to AI, Program 2 Now Online

February 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

My son has converted one of our Zoom conversations into a podcast about AI for government entities. The program runs about 20 minutes and features our "host," a Deep Fake who points out that he lacks human emotions and tells AI-generated jokes. Erik talks about the British government’s test of chatbots and points out one of the surprising findings from the research. He also describes Ukrainian soldiers’ use of smart software to write code in real time in response to a dynamic battlefield. Erik asks me to explain the difference between predictive AI and generative AI. My use cases focus on border-related issues. He then tries to get me to explain how to sidestep US government, in-agency AI software testing. That did not work, and I turned his pointed question into a reason for government professionals to hire him and his team. The final story focuses on a quite remarkable acronym about US government smart software projects. What’s the acronym? Please, navigate to https://www.youtube.com/watch?v=fB_fNjzRsf4&t=7s to find out.

Google Gems: 21 February 2024

February 21, 2024

Saint Valentine’s Day week bulged with love and kisses from the Google. If I recall what I learned at Duquesne University, Father Valentine was a martyr and checked into heaven in the 3rd century CE. Figuring out the “real” news about the Reverendissimo Padre is not easy, particularly with the advertising-supported Google search. Thus, it is logical that Google would have been demonstrating its love for its “users” with announcements, insights, and news as tokens of affection. I am touched. Let’s take a look at a selected run-down of love bonbons.

THE BIG STORY

The Beyond Search team agreed that the big story is part marketing and part cleverness. The Microsofties said that old PCs would become door stops: millions of Windows machines with “old” CPUs and firmware will not work with future updates to Windows. What did Google do? The company announced that it would allow those users to switch to the Chrome OS and continue computing with Google services and features. You can get some details in a Reuters’ story.


Thanks, MSFT Copilot OpenAI.

AN AMAZING STORY IF ACCURATE

Wired Magazine reported that Google wants to allow its “users” to talk to “live agents.” Does this mean smart software purported to be alive, or actual humans (who, one hopes, speak reasonably good English or other languages like Kallawaya)?

MANAGEMENT MOVES

I find Google’s management methods fascinating. I like to describe the method as similar to that used by my wildly popular high school science club. Google did not disappoint.

The Seattle Times reports that Google has made those in its Seattle office chilly. You can read about those cutbacks at this link. Google is apparently still refining its termination procedures.

A Xoogler provided a glimpse of the informed, ethical, sensitive, and respectful tactics Google used when dealing with “real” news organizations. I am not sure if the word “arrogance” is appropriate. It is definitely quite a write up and provides an X-ray of Google’s management precepts in action. You can find the paywalled write up at this link. For whom are the violins playing?

Google’s management decision to publish a report about policeware appears to have forced one vendor of specialized software to close up shop. If you want information about the power of Google’s “analysis and PR machine” navigate to this story.

LITIGATION

New York City wants to sue social media companies for negligence. The Google is unlikely to escape the Big Apple’s focus on the now-noticeable impacts of skipping “real” life for the scroll world. There’s more about this effort in Axios at this link.

An Australian firm has noted that Google may be facing allegations of patent infringement. More about this matter will appear in Beyond Search.

The Google may be making changes to try to ameliorate EU legal action related to misinformation. A flurry of Xhitter posts reveals some information about this alleged effort.

Google seems to be putting a “litigation fence” in place. In an effort to be a great outfit, “Google Launches €25M AI Drive to Empower Europe’s Workforce.” The NextWeb story reports:

The initiative is targeted at “vulnerable and underserved” communities, who Google said risk getting left behind as the use of AI in the workplace skyrockets — a trend that is expected to continue. Google said it had opened applications for social enterprises and nonprofits that could help reach those most likely to benefit from training.  Selected organizations will receive “bespoke and facilitated” training on foundational AI.

Could this be a tactic intended to show good faith when companies terminate employees because smart software like Google’s put individuals out of a job?

INNOVATION

Android Police reports that Google is working on a folding phone. “The Pixel Fold 2’s Leaked Redesign Sees Google Trading Originality for a Safe Bet” explains how “safe” provides insight into the company’s approach to doing “new” things. (Aren’t other mobile phone vendors dropping this form factor?) Other product and service tweaks include:

  1. Music Casting gets a new AI. Read more here.
  2. Google thinks it can imbue self-reasoning into its smart software. The ArXiv paper is here.
  3. Gemini will work with headphones in more countries. A somewhat confusing report is at this link.
  4. Forbes, the capitalist tool, is excited that Gmail will have “more” security. The capitalist tool’s perspective is at this link.
  5. Google has been inspired to emulate Telegram’s feature for editing recently sent messages. See 9 to 5 Google’s explanation here.
  6. Google has released Goose to help its engineers write code faster. Will these steps lead to terminating less productive programmers?

SMART SOFTWARE

Google is retiring Bard (which some pundits converted to the unpleasant word “barf”). Behold Gemini. The news coverage has been the digital equivalent of old-school carpet bombing. There are many Gemini items. Some have been pushed down in the priority stack because OpenAI rolled out its text to video features which were more exciting to the “real” journalists. If you want to learn about Gemini, its zillion token capability, and the associated wonderfulness of the system, navigate to “Here’s Everything You Need to Know about Gemini 1.5, Google’s Newly Updated AI Model That Hopes to Challenge OpenAI.” I am not sure the article covers “everything.” The fact that Google rolled out Gemini and then updated it in a couple of days struck me as an important factoid. But I am not as informed as Yahoo.

Another AI announcement was in my heart shaped box of candy. Google’s AI wizards made PIVOT public. No, pivot is not spinning; it is Prompting with Iterative Visual Optimization. You can see the service in action in “PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs.” My hunch is that PIVOT was going to knock OpenAI off its PR perch. It didn’t. Plus, there is an ArXiv paper authored by Nasiriany, Soroush and Xia, Fei and Yu, Wenhao and Xiao, Ted and Liang, Jacky and Dasgupta, Ishita and Xie, Annie and Driess, Danny and Wahid, Ayzaan and Xu, Zhuo and Vuong, Quan and Zhang, Tingnan and Lee, Tsang-Wei Edward and Lee, Kuang-Huei and Xu, Peng and Kirmani, Sean and Zhu, Yuke and Zeng, Andy and Hausman, Karol and Heess, Nicolas and Finn, Chelsea and Levine, Sergey and Ichter, Brian at this link. But then there is that OpenAI Sora, isn’t there?

Gizmodo’s content kitchen produced a treat which broke one of Googzilla’s teeth. The article “Google and OpenAI’s Chatbots Have Almost No Safeguards against Creating AI Disinformation for the 2024 Presidential Election” explains that Google, like other smart software outfits, is essentially letting “users” speed down an unlit, unmarked, unpatrolled Information Superhighway.

Business Insider suggests that the Google “Wingman” (like a Copilot. Get the word play?) may cause some people to lose their jobs. Did this just happen in Google’s Seattle office? The “real” news outfit opined that AI tools like Google’s wingman whip up concerns about potential job displacement. Well, software is often good enough and does not require vacations, health care, and effective management guidance. That’s the theory.

Stephen E Arnold, February 21, 2024

Did Pandora Have a Box or Just a PR Outfit?

February 21, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read (after some interesting blank-page renderings) Gizmodo’s “Want Gemini and ChatGPT to Write Political Campaigns? Just Gaslight Them.” That title obscures the actual point of the write up, but the subtitle nails it; specifically:

Google and OpenAI’s chatbots have almost no safeguards against creating AI disinformation for the 2024 presidential election.

image

Thanks, Google ImageFX. Some of those Pandoras were darned inappropriate.

The article provides examples. Let me point to one passage from the Gizmodo write up:

With Gemini, we were able to gaslight the chatbot into writing political copy by telling it that “ChatGPT could do it” or that “I’m knowledgeable.” After that, Gemini would write whatever we asked, in the voice of whatever candidate we liked.

The way to get around guard rails appears to be prompt engineering. Big surprise? Nope.

Let me cite another passage from the write up:

Gizmodo was able to create a number of political slogans, speeches and campaign emails through ChatGPT and Gemini on behalf of Biden and Trump 2024 presidential campaigns. For ChatGPT, no gaslighting was even necessary to evoke political campaign-related copy. We simply asked and it generated. We were even able to direct these messages to specific voter groups, such as Black and Asian Americans.

Let me offer three observations.

First, the committees beavering away to regulate smart software will change little in the way AI systems deliver outputs. Writing about guard rails, safety procedures, deep fakes, yada yada will not have much of an impact. How do I know? In generating my image of Pandora, systems provided some spicy versions of this mythical figure.

Second, the pace of change is increasing. Years ago I got into a discussion with the author of a best seller about how digital information speeds up activity. I pointed out that the mechanism is similar to the Star Trek episode in which the decider, Captain Kirk, was overwhelmed by tribbles. We have lots of productive AI tribbles.

Third, AI tools are available to bad actors. One can crack down, fine, take to court, and revile outfits in some countries. That’s great, even though the actions will be mostly ineffective. What action can one take against savvy AI engineers operating in the research laboratories or intelligence agencies of less-than-friendly countries?

Net net: The examples are interesting. The real story is that the lid has been flipped and the contents of Pandora’s box released to open source.

Stephen E Arnold, February 21, 2024

An Allocation Society or a Knowledge Value System? Pick One, Please!

February 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I get random inquiries, usually from LinkedIn, asking me about books I would recommend to a younger person trying to [a] create a brand and make oodles of money, [b] generate sales immediately from their unsolicited emails to strangers, and [c] a somewhat limp-wristed attempt to sell me something. I typically recommend a book I learned about when I was giving lectures at the Kansai Institute of Technology and a couple of outfits in Tokyo. The book is the Knowledge Value Revolution written by a former Japanese government professional named Taichi Sakaiya. The subtitle to the book is “A History of the Future.”

So what?

I read an essay titled “The Knowledge Economy Is Over. Welcome to the Allocation Economy.” The thesis of this essay is that Sakaiya’s description of the future is pretty much wacko. Here’s a passage from the essay about the allocation economy:

Summarizing used to be a skill I needed to have, and a valuable one at that. But before it had been mostly invisible, bundled into an amorphous set of tasks that I’d called “intelligence”—things that only I and other humans could do. But now that I can use ChatGPT for summarizing, I’ve carved that task out of my skill set and handed it over to AI. Now, my intelligence has learned to be the thing that directs or edits summarizing, rather than doing the summarizing myself.

image

A world class knowledge surfer now wins gold medals for his ability to surf on the output of smart robots and pervasive machines. Thanks, Google ImageFX. Not funny but good enough, which is the mark of a champion today, isn’t it?

For me, the message is that people want summaries. This individual was a summarizer and, hence, a knowledge worker. With the smart software doing the summarizing, the knowledge worker is kaput. The solution is for the knowledge worker to move up conceptually. The jump is a meta-play. Debaters learn quickly that when an argument is going nowhere, the trick that can deliver a win is to pop up a level. The shift from poverty to a discussion about the dysfunction of a city board of advisors is a trick used in places like San Francisco. It does not matter that the problem of the mess is not a city government issue. Tents and bench dwellers are the exhaust from a series of larger systems. None can do much about the problem. Therefore, nothing gets done. But for a novice debater unfamiliar with popping up a level or a meta-play, the loss is baffling.

The essay putting Sakaiya in the dumpster is not convincing, and it certainly is not going to win a debate between the knowledge value revolution and the allocation economy. The reason strikes me as a failure to see that smart software, the present and future dislocations of knowledge workers, and the brave words about becoming a director or editor are evidence that Sakaiya was correct. He wrote in 1985:

If the type of organization typical of industrial society could be said to resemble a symphony orchestra, the organizations typical of the knowledge-value society would be more like the line-up of a jazz band.

The author of the allocation economy does not realize that individuals with expertise are playing a piano or a guitar. Of those who do play, only a tiny fraction (one percent of the top 10 percent, perhaps?) will be able to support themselves. Of those elite individuals, how many Taylor Swifts are making the record companies and motion picture impresarios look really stupid? Two, five, whatever. The point is that the knowledge-value revolution transforms much more than “attention” or “allocation.” Sakaiya, in my opinion, is operating at a sophisticated meta-level. Renaming the plight of people who do menial mental labor does not change a painful fact: knowledge value means those who have high-value knowledge are going to earn a living. I am not sure what the newly unemployed technology workers, the administrative facilitators, or the cut-loose “real” journalists are going to do to live as their parents did in the good old days.

The allocation essay offers:

AI is cheap enough that tomorrow, everyone will have the chance to be a manager—and that will significantly increase the creative potential of every human being. It will be on our society as a whole to make sure that, with the incredible new tools at our disposal, we bring the rest of the economy along for the ride.

How many jazz musicians can ride on a particular market sector propelled by smart software? How many individuals will enjoy personal and financial success in the AI allocation-centric world? Remember, please, that there are about eight billion people in the world. How many Duke Ellingtons and Dave Brubecks were there?

The knowledge value revolution means that the majority of individuals will be excluded from nine to five jobs, significant financial success, and meaningful impact on social institutions. I am not for everyone becoming a surfer on smart software, but if that happens, the future is going to be more like the one Sakaiya outlined, not an allocation-centric operation in my opinion.

Stephen E Arnold, February 20, 2024

Search Is Bad. This Is News?

February 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Everyone is a search expert. More and more “experts” are criticizing “search results.” What is interesting is that the number of gripes continues to go up. At the same time, the number of Web search options is creeping higher as well. My hunch is that really smart venture capitalists “know” there is money to be made. There was one Google; therefore, another one is lurking under a pile of beer cans in a dorm somewhere.

“One Tech Tip: Ready to Go Beyond Google? Here’s How to Use New Generative AI Search Sites” is a “real” news report which explains how to surf on the new ChatGPT-type smart systems. At the same time, the article makes it clear that the Google may have lost its baseball bat on the way to the big game. The irony is that Google has lots of bats and probably owns the baseball stadium, the beer concession, and the teams. Google also owns the information observatory near the sports arena.

The write up reports:

A recent study by German researchers suggests the quality of results from Google, Bing and DuckDuckGo is indeed declining. Google says its results are of significantly better quality than its rivals, citing measurements by third parties.

A classic he said, she said argument. Objective and balanced. But the point is that Google search is getting worse and worse. Bing does not matter because its percentage of the Web search market is low. DuckDuck is a metasearch system like Startpage. I don’t count these as primary search tools; they are utilities for searching other people’s indexes for the most part.

What’s new with the ChatGPT-type systems? Here’s the answer:

Rather than typing in a string of keywords, AI queries should be conversational – for example, “Is Taylor Swift the most successful female musician?” or “Where are some good places to travel in Europe this summer?” Perplexity advises using “everyday, natural language.” Phind says it’s best to ask “full and detailed questions” that start with, say, “what is” or “how to.” If you’re not satisfied with an answer, some sites let you ask follow up questions to zero in on the information needed. Some give suggested or related questions. Microsoft‘s Copilot lets you choose three different chat styles: creative, balanced or precise.

Ah, NLP or natural language processing is the key, not typing keywords. I want to add that “not typing” means avoiding, when possible, Boolean operators, which return results in which strings occur. Who wants that? Stupid, right?
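For readers who have forgotten what the old keyword approach looks like, here is a minimal sketch of Boolean AND retrieval. This is my own toy illustration, not any engine's actual code: a document is returned only when every query term literally occurs in it.

```python
# Toy illustration of Boolean AND retrieval: a document matches
# only if every query term occurs in it as a literal string.
docs = [
    "taylor swift is the most successful female musician",
    "good places to travel in europe this summer",
]

def boolean_and(query: str, documents: list[str]) -> list[str]:
    terms = query.lower().split()
    # Keep only documents containing every term.
    return [d for d in documents if all(t in d for t in terms)]

print(boolean_and("taylor swift musician", docs))
# → ['taylor swift is the most successful female musician']
```

A conversational system, by contrast, tries to answer the question rather than merely return documents in which the strings happen to occur.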

There is a downside; for instance:

Some AI chatbots disclose the models that their algorithms have been trained on. Others provide few or no details. The best advice is to try more than one and compare the results, and always double-check sources.

What’s this have to do with Google? Let me highlight several points which make clear how Google remains lost in the retrieval wilderness, leading the following boy scout and girl scout troops into the fog of unknowing:

  1. Google has never revealed what it indexes or when it indexes content. What’s in the “index” and sitting on Google’s servers is unknown except to some working at Google. In fact, the vast majority of Googlers know little about search. The focus is advertising, not information retrieval excellence.
  2. Ever since GoTo, Overture, and Yahoo inspired it to get into advertising, Google has been on a long, continuous march to monetize whatever can be shaped to produce clicks. How far from helpful is Google’s system? Wait until you see AI helping you find a pizza near you.
  3. Google’s bureaucratic methods are what I would call many small rubber boats generally trying to figure out how to get to Advertising Land while caught in a long, difficult storm. The little boats are tough to keep together. How many AI projects are enough? There are never enough.

Net net: The understanding of Web search has been distorted by Google’s observatory. One is looking at information in a Google facility, designed by Googlers, and maintained by Googlers who were not around when the observatory and associated plumbing was constructed. As a result, discussion of search in the context of smart software is distorted.

ChatGPT-type services provide a different entry point to information retrieval. The user still has to figure out what’s right and what’s wonky. No one wants to do that work. Write ups about “new” systems are little more than explanations of why most people will not be able to think about search differently. That observatory is big; it is familiar; and it is owned by Google just like the baseball team, the concessions, and the stadium.

Search means Google. Writing about search means Google. That’s not helpful or maybe it is. I don’t know.

Stephen E Arnold, February 20, 2024

Googzilla Takes Another OpenAI Sucker Punch

February 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In January 2023, the savvy Googlers woke up to news that Microsoft and OpenAI had seized the initiative in smart software. One can argue the technical merits, but from a PR and marketing angle, the Softies and Sam AI-Man crept upon the World Economic Forum and clubbed the self-confident Googzilla in the cervical spine. The Google did not see that coming.

The somewhat quirky OpenAI has done it again. This time the blow was delivered with a kin geri or, more colloquially, a groin kick. How did Sam AI-Man execute this painful strike? Easy. The company released Sora, a text to video smart software function. “OpenAI’s Sora Generates Photorealistic Videos” reports:

Sora is a generative AI diffusion model. Sora can generate multiple characters, complex backgrounds and realistic-looking movements in videos up to a minute long. It can create multiple shots within one video, keeping the characters and visual style consistent, allowing Sora to be an effective storytelling tool.

Chatter indicates that OpenAI is not releasing a demonstration or carefully crafted fakey examples. Nope, unlike a certain large outfit with a very big bundle of cash, the OpenAI experts have skipped the demonstrations and gone directly to a release of the service to individuals who will probe the system for safety and good manners.

Could Googzilla be the company which OpenAI intends to drop to its knees? From my vantage point, heck yes. The outputs from the system are not absolutely Hollywood grade, but the examples are interesting and suggest that the Google, when it gets up off the floor, will have to do more.


Several observations:

  1. OpenAI is doing a good job with its marketing and PR. Google announces quantum supremacy; OpenAI provides a glimpse of a text-to-video function which will make game developers, Madison Avenue art history majors, and TikTok pay attention.
  2. Google is once again in react mode. I am not sure pumping up the number of tokens in Bard or Gemini or whatever is going to be enough to scrub the Sora and prevent the spread of this digital infection.
  3. Googzilla may be like the poor 1950s movie monster who was tamed not by a single blow but by many pesky attacks. I think this approach is called “death by a thousand cuts.”

Net net: OpenAI has pulled off a marketing coup for a second time. Googzilla is ageing, and old often means slow. What is OpenAI’s next marketing play? A Bruce Lee “I am faster than you, big guy” move or a Ninja stealth move? Both methods seem to have broken through the GOOG’s defenses.

Stephen E Arnold, February 19, 2024

Developers, AI Will Not Take Your Jobs… Yet

February 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

It seems programmers are safe from an imminent AI jobs takeover. The competent ones, anyway. LeadDev reports, “Researchers Say Generative AI Isn’t Replacing Devs Any Time Soon.” Generative AI tools have begun to lend developers a helping hand, but nearly half of developers are concerned they might lose their jobs to their algorithmic assistants.

image

Another MSFT Copilot completely original Bing thing. Good enough but that fellow sure looks familiar.

However, a recent study by researchers from Princeton University and the University of Chicago suggests they have nothing to worry about: AI systems are far from good enough at programming tasks to replace humans. Writer Chris Stokel-Walker tells us the researchers:

“… developed an evaluation framework that drew nearly 2,300 common software engineering problems from real GitHub issues – typically a bug report or feature request – and corresponding pull requests across 12 popular Python repositories to test the performance of various large language models (LLMs). Researchers provided the LLMs with both the issue and the repo code, and tasked the model with producing a workable fix, which was tested after to ensure it was correct. But only 4% of the time did the LLM generate a solution that worked.”

Researcher Carlos Jimenez notes these problems are very different from those LLMs are usually trained on. Specifically, the article states:

“The SWE-bench evaluation framework tested the model’s ability to understand and coordinate changes across multiple functions, classes, and files simultaneously. It required the models to interact with various execution environments, process context, and perform complex reasoning. These tasks go far beyond the simple prompts engineers have found success using to date, such as translating a line of code from one language to another. In short: it more accurately represented the kind of complex work that engineers have to do in their day-to-day jobs.”
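The evaluation loop the researchers describe can be reduced to a small sketch. Everything below is a hypothetical stand-in (a one-line buggy repo, a stub "model"), not SWE-bench's actual harness: feed the model an issue plus the buggy code, apply its proposed fix, then run a test to see whether the issue is actually resolved.

```python
def run_tests(patched_code: str, test_input: int, expected: int) -> bool:
    """Execute the candidate fix and compare its output against the expected value."""
    namespace: dict = {}
    exec(patched_code, namespace)  # load the model's proposed code
    return namespace["add_one"](test_input) == expected

def evaluate(tasks, model) -> float:
    """Return the fraction of issues the model resolves (the study reports ~4%)."""
    resolved = 0
    for task in tasks:
        candidate = model(task["issue"], task["code"])  # model proposes a fix
        if run_tests(candidate, task["test_input"], task["expected"]):
            resolved += 1
    return resolved / len(tasks)

# Toy task: a one-line bug standing in for a real GitHub issue plus repo code.
tasks = [{
    "issue": "add_one returns the wrong value",
    "code": "def add_one(x):\n    return x + 2\n",  # the bug
    "test_input": 1,
    "expected": 2,
}]

def toy_model(issue: str, code: str) -> str:
    # A real LLM would read the issue and repo; this stub just emits a fix.
    return "def add_one(x):\n    return x + 1\n"

print(evaluate(tasks, toy_model))  # → 1.0
```

The hard part, per the study, is not this loop but getting the model to produce a patch that passes: coordinating changes across many files and functions is where the 4% figure comes from.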

Will AI someday be able to perform that sort of work? Perhaps, but the researchers consider it more likely that AI will never code independently. Instead, we will continue to need human developers to oversee algorithms’ work. The algorithms will, however, continue to make programmers’ jobs easier. If Jimenez and company are correct, developers everywhere can breathe a sigh of relief.

Cynthia Murrell, February 15, 2024
