Batting Google and Whiffing the Chance

December 6, 2024

This is the work of a dinobaby. Smart software helps me with art, but the actual writing? Just me and my keyboard.

I read “The AI War Was Never Just about AI.” Okay, AI war. We have a handful of mostly unregulated technology companies, a few nation states, and some unknown wizards working in their family garages. The situation is that a very tiny number of companies are fighting to become de facto reality definers for the next few years, maybe a decade or two. Against that background, does a single country’s judiciary think it can “regulate” an online company? One pragmatic approach has been to ban a service, the approach taken by Australia, China, Iran, and Russia, among others. A less popular approach would be to force the organization out of business by arresting key executives, seizing assets, and imposing penalties on that organization’s partners. Does that sound a bit over the top?

The cited article does not go to the pit in the apricot. Google has been allowed to create an interlocking group of services which permeate the fabric of global online activity. There is no entertainment for some people in Armenia except YouTube. There are few choices to promote a product online without bumping into the Disney-style people herders who push those who want to sell toward Google’s advertising systems. There is no getting from Point A to Point B without Google’s finding services, whether dolled up in an AI wrapper, a digital version of a map, or a helpful message on the sign of a lawn service truck for Google Local.

The write up says:

The government wants to break up Google’s monopoly over the search market, but its proposed remedies may in fact do more to shape the future of AI. Google owns 15 products that serve at least half a billion people and businesses each—a sprawling ecosystem of gadgets, search and advertising, personal applications, and enterprise software. An AI assistant that shows up in (or works well with) those products will be the one that those people are most likely to use. And Google has already woven its flagship Gemini AI models into Search, Gmail, Maps, Android, Chrome, the Play Store, and YouTube, all of which have at least 2 billion users each. AI doesn’t have to be life-changing to be successful; it just has to be frictionless.

Okay. With a new administration taking the stage, how will this goal of leveling the playing field work? The legal processes at Google’s disposal mean that whatever the US government does can be appealed. Appeals take time. Who lasts longer? A government lawyer working under the thumb of DOGE and budget cutting, or a giant outfit like Google? My view is that Google has more lawyers and more continuity.

Second, breaking up Google may face some headwinds from government entities quite dependent on its activities. The entire OSINT sector looks to Google for nuggets of information. It is possible some government agencies have embedded Google personnel on site. The “advertising” industry depends on distribution via the online stores of Apple and Google. Why is this important? The data brokers repackage the app data into data streams consumed by some government agencies and their contractors.

The write up says:

This is why it’s relevant that the DOJ’s proposed antitrust remedy takes aim at Google’s broader ecosystem. Federal and state attorneys asked the court to force Google to sell off its Chrome browser; cease preferencing its search products in the Android mobile operating system; prevent it from paying other companies, including Apple and Samsung, to make Google the default search engine; and allow rivals to syndicate Google’s search results and use its search index to build their own products. All of these and the DOJ’s other requests, under the auspices of search, are really shots at Google’s expansive empire.

So after more than 20 years of non-regulation and hand slapping, the current legal decision is going to take apart an entity which is more like a cancer than a telephone company like AT&T. IBM was mostly untouched by the US government, as was Microsoft. Now I am to believe that a vastly different type of commercial enterprise, one which is for some functions more robust and effective than a government, can have its wings clipped.

Is the Department of Justice concerned about AI? Come on. The DoJ personnel are thinking about the Department of Government Efficiency, presidential retribution, and enhancing LinkedIn profiles.

We are not in Kansas any longer where there is no AI war.

Stephen E Arnold, December 6, 2024

The Very Expensive AI Horse Race

December 4, 2024

This write up is from a real and still-alive dinobaby. If there is art, smart software has been involved. Dinobabies have many skills, but Gen Z art is not one of them.

One of the academic nemeses of smart software is a professional named Gary Marcus. Among his many intellectual accomplishments is a cameo appearance on a former Jack Benny child star’s podcast. Mr. Marcus contributes his views of smart software to the person who, for a number of years, has been a voice actor on the Simpsons cartoon.

The big four robot stallions are racing to a finish line. Is the finish line moving away from the equines faster than the steeds can run? Thanks, MidJourney. Good enough.

I want to pay attention to Mr. Marcus’ Substack post “A New AI Scaling Law Shell Game?” The main idea is that the scaling law has entered popular computer jargon. Once the lingo of Galileo, “scaling law” now names the belief that AI, like CPUs, simply gets better as it gets bigger.
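For readers new to the jargon, the AI version of a scaling law is usually written as a power law tying model loss to a resource such as parameter count or training compute. Here is a minimal sketch of that canonical form; the symbols are placeholders drawn from the general scaling-law literature, not figures from Mr. Marcus’ essay:

\[ L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha} \]

Here N is the resource being scaled (parameters or compute), N_c and α are empirically fitted constants, and L is the model’s test loss. The belief under scrutiny is that cranking N upward reliably drives L downward.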

In this essay, Mr. Marcus asserts that getting bigger may not work unless humanoids (presumably assisted by AI) innovate other enabling processes. Mr. Marcus is aware of the cost of infrastructure, the cost of electricity, and the probable costs of exhausting content.

From my point of view, a bit more empirical “evidence” would be useful. (I am aware of academic research fraud.) Also, Mr. Marcus references me when he says keep your hands on your wallet. I am not sure that a fix is possible. The analogy is the old chestnut about changing a Sopwith Camel’s propeller when the aircraft is in a dogfight and the synchronized machine gun is firing through the propeller.

I want to highlight one passage in Mr. Marcus’ essay and offer a handful of comments. Here’s the passage I noted:

Over the last few weeks, much of the field has been quietly acknowledging that recent (not yet public) large-scale models aren’t as powerful as the putative laws were predicting. The new version is that there is not one scaling law, but three: scaling with how long you train a model (which isn’t really holding anymore), scaling with how long you post-train a model, and scaling with how long you let a given model wrestle with a given problem (or what Satya Nadella called scaling with “inference time compute”).

I think this is a paragraph I will add to my quotes file. The reasons are:

First, investors, would-be entrepreneurs, and giant outfits really want a next big thing. Microsoft fired the opening shot in the smart software war in early 2023. Mr. Nadella suggested that smart software would be the next big thing for Microsoft. The company has invested in making good on this statement. Now Microsoft 365 is infused with smart software, and Azure is burbling with digital glee about its “we’re first” status. However, a number of people have asked, “Where’s the financial payoff?” The answer is standard Silicon Valley catechism: “The payoff is going to be huge. Invest now.” If prayers could power hope, AI is going to be hyperbolic, just as the marketing collateral for AI promises. But it is almost 2025, and those billions have not generated more billions and profit for the Big Dogs of AI. Just sayin’.

Second, the idea that the scaling law is really multiple scaling laws is interesting. But if one scaling law fails to deliver, what happens to the other scaling laws? The interdependencies of the processes for the scaling laws might evoke new, hitherto unidentified scaling laws. Will each scaling law require massive investments to deliver? Is it feasible to pay off the investments in these processes with the original concept of the scaling law as applied to AI? I wonder if a reverse Ponzi scheme is emerging: the more money pumped in, the smaller the likelihood of success. Is AI a demonstration of convergence, or is it more like the harmonic series, 1 + 1/2 + 1/3 + 1/4 and so on, where each new term contributes less and less? Just askin’.

Third, the performance or knowledge payoff I have experienced with my tests of OpenAI and the software available to me on You.com makes clear that the systems cannot handle what I consider routine questions. A recent example was my request to receive a list of the exhibitors at the November 1 Gateway Conference held in Dubai for crypto fans of Telegram’s The Open Network Foundation and TON Social. The systems were unable to deliver the lists. This is just one notable failure which a humanoid on my research team was able to rectify in an expeditious manner. (Did you know the Ku Group was on my researcher’s list?) Just reportin’.

Net net: Will AI repay the billions sunk into the data centers, the legal fees (many still looming), the staff, and the marketing? If you ask an accelerationist, the answer is, “Absolutely.” If you ask a dinobaby, you may hear, “Maybe, but some fundamental innovations are going to be needed.” If you ask an “AI will kill us all” type like the Xoogler Mo Gawdat, you will hear, “Doom looms.” Just dinobabyin’.

Stephen E Arnold, December 4, 2024

The New Coca Cola of Marketing: Xmas Ads

December 4, 2024

Though Coca-Cola has long purported that “It’s the Real Thing,” a recent ad is all fake. NBC News reports, “Coca-Cola Causes Controversy with AI-Made Ad.” We learn:

“Coca-Cola is facing backlash online over an artificial intelligence-made Christmas promotional video that users are calling ‘soulless’ and ‘devoid of any actual creativity.’ The AI-made video features everything from big red Coca-Cola trucks driving through snowy streets to people smiling in scarves and knitted hats holding Coca-Cola bottles. The video was meant to pay homage to the company’s 1995 commercial ‘Holidays Are Coming,’ which featured similar imagery, but with human actors and real trucks.”

The company’s last ad generated with AI, released earlier this year, did not face similar backlash. Is that because, as University of Wisconsin-Madison’s Neeraj Arora suggests, Coke’s Christmas ads are somehow sacrosanct? Or is it because March’s Masterpiece is actually original, clever, and well executed? Or because the artworks copied in that ad are treated with respect and, for some, clearly labeled? Whatever the reason, the riff on Coca-Cola’s own classic 1995 ad missed the mark.

Perhaps it was just too soon. It may be a matter of when, not if, the public comes to accept AI-generated advertising as the norm. One thing is certain: Coca-Cola knows how to make sure marketing professors teach memorable case examples of corporate “let’s get hip” thinking.

Cynthia Murrell, December 4, 2024

New Concept: AI High

December 3, 2024

Is the AI hype-a-thon finally slowing? Nope. And our last nerves may not be the only thing to suffer. The AI industry could be shooting itself in the foot. ComputerWorld predicts, “AI Is on a Fast Track, but Hype and Immaturity Could Derail It.” Writer Scot Finnie reports:

“The hype is so extreme that a fall-out, which Gartner describes in its technology hype cycle reports as the ‘trough of disillusionment,’ seems inevitable and might be coming this year. That’s a testament to both genAI’s burgeoning potential and a sign of the technology’s immaturity. The outlook for deep learning for predictive models and genAI for communication and content generation is bright. But what’s been rarely mentioned amid the marketing blitz of recent months is that the challenges are also formidable. Machine learning tools are only as good as the data they’re trained with. Companies are finding that the millions of dollars they’ve spent on genAI have yielded lackluster ROI because their data is filled with contradictions, inaccuracies, and omissions. Plus, the hype surrounding the technology makes it difficult to see that many of the claimed benefits reside in the future, not the present.”

Oops. The article notes some of the persistent problems with generative AI, like hallucinations, repeated crashes, and bias. Then there are the uses bad actors have for these tools, from phishing scams to deepfakes. For investors, disappointing results and returns are prompting second thoughts. None of this means AI is completely worthless, Finnie advises. He just suggests holding off until the rough edges are smoothed out before going all in. Probably a good idea. Digital mushrooms.

December 3, 2024

Deepfakes: An Interesting and Possibly Pernicious Arms Race

December 2, 2024

As it turns out, deepfakes are a difficult problem to contain. Who knew? As victims from celebrities to schoolchildren multiply exponentially, USA Today asks, “Can Legislation Combat the Surge of Non-Consensual Deepfake Porn?” Journalist Dana Taylor interviewed UCLA’s John Villasenor on the subject. To us, the answer is simple: Absolutely not. As with any technology, regulation is reactive while bad actors are proactive. Villasenor seems to agree. He states:

“It’s sort of an arms race, and the defense is always sort of a few steps behind the offense, right? In other words that you make a detection tool that, let’s say, is good at detecting today’s deepfakes, but then tomorrow somebody has a new deepfake creation technology that is even better and it can fool the current detection technology. And so then you update your detection technology so it can detect the new deepfake technology, but then the deepfake technology evolves again.”

Exactly. So if governments are powerless to stop this horror, what can? Perhaps big firms will fight tech with tech. The professor dreams:

“So I think the longer term solution would have to be automated technologies that are used and hopefully run by the people who run the servers where these are hosted. Because I think any reputable, for example, social media company would not want this kind of content on their own site. So they have it within their control to develop technologies that can detect and automatically filter some of this stuff out. And I think that would go a long way towards mitigating it.”

Sure. But what can be done while we wait on big tech to solve the problem it unleashed? Individual responsibility, baby:

“I certainly think it’s good for everybody, and particularly young people these days to be just really aware of knowing how to use the internet responsibly and being careful about the kinds of images that they share on the internet. … Even images that are sort of maybe not crossing the line into being sort of specifically explicit but are close enough to it that it wouldn’t be as hard to modify being aware of that kind of thing as well.”

Great, thanks. Admitting he may sound naive, Villasenor also envisions education to the (partial) rescue:

“There’s some bad actors that are never going to stop being bad actors, but there’s some fraction of people who I think with some education would perhaps be less likely to engage in creating these sorts of… disseminating these sorts of videos.”

Our view is that digital tools allow the dark side of individuals to emerge and expand.

Cynthia Murrell, December 2, 2024

AI In Group Communications: The Good and the Bad

November 29, 2024

In theory, AI that can synthesize many voices into one concise, actionable statement is very helpful. In practice, it is complicated. The Tepper School of Business at Carnegie Mellon announces, “New Paper Co-Authored by Tepper School Researchers Articulates How Large Language Models are Changing Collective Intelligence Forever.” Researchers from Tepper and other institutions worked together on the paper, which was published in Nature Human Behavior. We learn:

“[Professor Anita Williams] Woolley and her co-authors considered how LLMs process and create text, particularly their impact on collective intelligence. For example, LLMs can make it easier for people from different backgrounds and languages to communicate, which means groups can collaborate more effectively. This technology helps share ideas and information smoothly, leading to more inclusive and productive online interactions. While LLMs offer many benefits, they also present challenges, such as ensuring that all voices are heard equally.”

Indeed. The write-up continues:

“‘Because LLMs learn from available online information, they can sometimes overlook minority perspectives or emphasize the most common opinions, which can create a false sense of agreement,’ said Jason Burton, an assistant professor at Copenhagen Business School. Another issue is that LLMs can spread incorrect information if not properly managed because they learn from the vast and varied content available online, which often includes false or misleading data. Without careful oversight and regular updates to ensure data accuracy, LLMs can perpetuate and even amplify misinformation, making it crucial to manage these tools responsibly to avoid misleading outcomes in collective decision-making processes.”

In order to do so, the paper suggests, we must further explore LLMs’ ethical and practical implications. Only then can we craft effective guidelines for responsible AI summarization. Such standards are especially needed, the authors note, for any use of LLMs in policymaking and public discussions.

But not to worry. The big AI firms are all about due diligence, right?

Cynthia Murrell, November 29, 2024

AI Invents Itself: Good News?

November 28, 2024

Everyone from technology leaders to conspiracy theorists is fearful of a robot apocalypse. Functional robots are still years away from practicality, but AI is filling that antagonist role nicely. Forbes shares how AI is on its way to creating itself: “AI That Can Invent AI Is Coming. Buckle Up.” AI is learning how to automate more tasks and will soon be able to replace a human at a job.

In order for AI to become fully self-sufficient, it only needs to learn the job of an AI researcher. With a simple feedback loop, AI can develop superior architectures and improve on them as it advances its “research.” It might sound farfetched, but an AI developer’s role is simple: read about AI, invent new questions to ask, and implement experiments to test and answer those questions. It might sound too simple, but algorithms are designed to automate and figure out knowledge:

“For one thing, research on core AI algorithms and methods can be carried out digitally. Contrast this with research in fields like biology or materials science, which (at least today) require the ability to navigate and manipulate the physical world via complex laboratory setups. Dealing with the real world is a far gnarlier challenge for AI and introduces significant constraints on the rate of learning and progress. Tasks that can be completed entirely in the realm of “bits, not atoms” are more achievable to automate. A colorable argument could be made that AI will sooner learn to automate the job of an AI researcher than to automate the job of a plumber.

Consider, too, that the people developing cutting-edge AI systems are precisely those people who most intimately understand how AI research is done. Because they are deeply familiar with their own jobs, they are particularly well positioned to build systems to automate those activities.”
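Purely as a thought experiment, here is what the quoted feedback loop reduces to in toy form: propose a tweak, run an experiment, keep whatever scores better, repeat. The Python below is hypothetical filler (the single hyperparameter and the fake scoring function stand in for real research work); it sketches the shape of the loop, not any actual auto-research system.

import random

def run_experiment(learning_rate: float) -> float:
    # Stand-in for "implement an experiment": a real system would train and
    # evaluate a model here; this fake score just peaks near 0.003.
    return -abs(learning_rate - 0.003) + random.gauss(0, 1e-4)

best_lr, best_score = 0.1, float("-inf")
for step in range(50):
    candidate = best_lr * random.uniform(0.5, 1.5)   # "invent a new question to ask"
    score = run_experiment(candidate)                # "test and answer it"
    if score > best_score:                           # keep improvements, discard the rest
        best_lr, best_score = candidate, score

print(f"Best learning rate found: {best_lr:.4f}")

Read this way, the quote’s claim is that the whole loop can run in “bits, not atoms,” which is what makes the researcher’s job look more automatable than the plumber’s.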

In the future, AI will develop and reinvent itself, but the current AI still can’t get basic facts about the Constitution or living humans correct. AI is frankly still very dumb. Humans haven’t made themselves obsolete yet, but we’re on our way to doing that. Celebrate!

Whitney Grace, November 28, 2024

Early AI Adoption: Some Benefits

November 25, 2024

Is AI good or is it bad? The debate is still raging, especially in Hollywood, where writers, animators, and other creatives are demanding the technology be removed from the industry. AI, however, is a tool. It can be used for good and bad acts, but humans are the ones who initiate them. AI at Wharton investigated how users are currently adopting AI: “Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report.”

The report was based on responses from full-time employees who worked in commercial organizations with 1,000 or more workers. Adoption of AI in businesses jumped from 37% in 2023 to 72% in 2024, with high growth in human resources and marketing departments. Companies are still unsure if AI is worth the ROI. The study explains that AI will benefit companies that have adaptable organizations and measurable ROI.

The report includes charts that document the high rate of usage compared with last year as well as how AI is mostly being used. It is being used for document writing and editing, data analytics, document summarization, marketing content creation, personal marketing and advertising, internal support, customer support, fraud prevention, and report creation. AI is definitely impactful, though not overwhelmingly so, but the response to the new technology is positive, and companies will continue to invest in it.

“Looking to the future, Gen AI adoption will enter its next chapter which is likely to be volatile in terms of investment and carry greater privacy and usage restrictions. Enthusiasm projected by new Chief AI Officer (CAIO) role additions and team expansions this year will be tempered by the reality of finding “accountable” ROI. While approximately three out of four industry respondents plan to increase Gen AI budgets next year, the majority expect growth to slow over the longer term, signaling a shift in focus towards making the most effective internal investments and building organizational structures to support sustainable Gen AI implementation. The key to successful adoption of Gen AI will be proper use cases that can scale, and measurable ROI as well as organization structures and cultures that can adapt to the new technology.”

While the responses are positive, how exactly is AI being used beyond the charts? Are the users implementing AI for work shortcuts, such as really slapdash content generation? I’d hate to be the lazy employee who uses AI to make the next quarterly report and doesn’t double-check the information.

Whitney Grace, November 25, 2024

Point-and-Click Coding: An eGame Boom Booster

November 22, 2024

TheNextWeb explains “How AI Can Help You Make a Computer Game Without Knowing Anything About Coding.” That’s great—unless one is a coder who makes one’s living on computer games. Writer Daniel Zhou Hao begins with a story about one promising young fellow:

“Take Kyo, an eight-year-old boy in Singapore who developed a simple platform game in just two hours, attracting over 500,000 players. Using nothing but simple instructions in English, Kyo brought his vision to life leveraging the coding app Cursor and also Claude, a general purpose AI. Although his dad is a coder, Kyo didn’t get any help from him to design the game and has no formal coding education himself. He went on to build another game, an animation app, a drawing app and a chatbot, taking about two hours for each. This shows how AI is dramatically lowering the barrier to software development, bridging the gap between creativity and technical skill. Among the range of apps and platforms dedicated to this purpose, others include Google’s AlphaCode 2 and Replit’s Ghostwriter.”

The write-up does not completely leave experienced coders out of the discussion. Hao notes tools like Tabnine and GitHub Copilot act as auto-complete assistants, while Sourcery and DeepCode take the tedium out of code cleanup. For the 70-ish percent of companies that have adopted one or more of these tools, he tells us, the benefits include time savings and more reliable code. Does this mean developers will shift to “higher value tasks,” like creative collaboration and system design, as Hao insists? Or will it just mean firms will lighten their payrolls?

As for building one’s own game, the article lists seven steps. They are akin to basic advice for developing a product, but with an AI-specific twist. For those who want to know how to make one’s AI game addictive, contact benkent2020 at yahoo dot com.

Cynthia Murrell, November 22, 2024

China Smart, US Dumb: LLMs Bad, MoEs Good

November 21, 2024

Okay, an “MoE” is an alternative to LLMs. An “MoE” is a mixture of experts. An LLM is a one-trick pony starting to wheeze.

Google, Apple, Amazon, GitHub, OpenAI, Facebook, and other organizations are at the top of the list when people think about AI innovations. We forget about other countries and universities experimenting with the technology. Tencent is a China-based technology conglomerate located in Shenzhen, and it is the world’s largest video game company when equity investments are considered. Tencent is also the developer of Hunyuan-Large, the world’s largest MoE.

According to Tencent, LLMs (large language models) are things of the past. LLMs served their purpose to advance AI technology, but Tencent realized that it was necessary to optimize resource consumption while simultaneously maintaining high performance. That’s when the company turned to the next evolution of the LLM: the MoE, or mixture-of-experts, model.

Cornell University’s open-access science archive posted this paper on the MoE: “Hunyuan-Large: An Open-Source MoE Model With 52 Billion Activated Parameters By Tencent” and the abstract explains it is a doozy of a model:

“In this paper, we introduce Hunyuan-Large, which is currently the largest open-source Transformer-based mixture of experts model, with a total of 389 billion parameters and 52 billion activation parameters, capable of handling up to 256K tokens. We conduct a thorough evaluation of Hunyuan-Large’s superior performance across various benchmarks including language understanding and generation, logical reasoning, mathematical problem-solving, coding, long-context, and aggregated tasks, where it outperforms LLama3.1-70B and exhibits comparable performance when compared to the significantly larger LLama3.1-405B model. Key practice of Hunyuan-Large include large-scale synthetic data that is orders larger than in previous literature, a mixed expert routing strategy, a key-value cache compression technique, and an expert-specific learning rate strategy. Additionally, we also investigate the scaling laws and learning rate schedule of mixture of experts models, providing valuable insights and guidance for future model development and optimization. The code and checkpoints of Hunyuan-Large are released to facilitate future innovations and applications.”
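A note on the “activation parameters” figure: in a mixture-of-experts model, a small router picks a handful of experts for each token, so only a slice of the total parameter count (52 billion of 389 billion here) does any work on a given token. The toy Python sketch below shows the top-k routing idea; the expert count, dimensions, and gating math are hypothetical illustrations, not anything from Tencent’s code or paper.

import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical expert count
TOP_K = 2         # experts "activated" per token
D_MODEL = 16      # hypothetical hidden size

# Each "expert" is just one small feed-forward weight matrix in this sketch.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.1   # gating weights

def moe_layer(token):
    # Score every expert, keep only the top-k, and mix their outputs.
    logits = token @ router
    top = np.argsort(logits)[-TOP_K:]                          # indices of the k best experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the winners
    # Only the selected experts do any work; the rest stay idle for this token.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
print(moe_layer(token).shape)   # (16,) -- same shape a dense layer would return

The routing trick is why the total parameter count can balloon while the compute spent per token stays roughly flat, which is the resource-consumption argument Tencent makes for moving beyond plain LLMs.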

Tencent has released Hunyuan-Large as an open source project, so other AI developers can use the technology! The well-known companies will definitely be experimenting with Hunyuan-Large. Is there an ulterior motive? Sure. Money, prestige, and power are at stake in the AI global game.

Whitney Grace, November 21, 2024
