Google: Making a Buck Is the Name of the Game

December 30, 2024

This blog post was crafted by a still-living dinobaby.

This is a screenshot of YouTube with an interesting advertisement. Take a look:

image

Here’s a larger version of the ad:

image

Now here’s the landing page for the teaser which looks like a link to a video:

image

The site advertising on YouTube.com is Badgeandwallet.com. The company offers a number of law enforcement-related products. Here’s a sample of the badges available to a person exploring the site:

image

How many law enforcement officers are purchasing badges from an ad on YouTube? At some US government facilities, shops will provide hats and jackets with agency identification on them. However, to make a purchase, a visitor to the store must present current credentials.

YouTube.com and its parent are under scrutiny for a number of the firm’s business tactics. I reacted negatively to the inclusion of this advertisement in search results related to real estate in Beverly Hills, California.

Is Google the brilliant smart software company it says it is, or is the company just looking to make a buck with ads likely to be viewed by individuals who have little or nothing to do with law enforcement or government agencies?

I hope that 2025 will allow Google to demonstrate that it wants to be viewed as a company operating with a functioning moral compass. My hunch is that I will be disappointed as I have been with quantum supremacy and Googley AI.

Stephen E Arnold, December 30, 2024

AI Video Is Improving: Hello, Hollywood!

December 30, 2024

Has AI video gotten scarily believable? Well, yes. For anyone who has not gotten the memo, The Guardian declares, “Video Is AI’s New Frontier—and It Is so Persuasive, We Should All Be Worried.” Writer Victoria Turk describes recent developments:

“Video is AI’s new frontier, with OpenAI finally rolling out Sora in the US after first teasing it in February, and Meta announcing its own text-to-video tool, Movie Gen, in October. Google made its Veo video generator available to some customers this month. Are we ready for a world in which it is impossible to discern which of the moving images we see are real?”

Ready or not, here it is. No amount of hand-wringing will change that. Turk mentions ways bad actors abuse the technology: Scammers who impersonate victims’ loved ones to extort money. Deepfakes created to further political agendas. Fake sexual images and videos featuring real people. She also cites safeguards like watermarks and content restrictions as evidence AI firms understand the potential for abuse.

But the author’s main point seems to be more philosophical. It was prompted by convincing fake footage of a tree frog, documentary style. She writes:

“Yet despite the technological feat, as I watched the tree frog I felt less amazed than sad. It certainly looked the part, but we all knew that what we were seeing wasn’t real. The tree frog, the branch it clung to, the rainforest it lived in: none of these things existed, and they never had. The scene, although visually impressive, was hollow.”

Turk also laments the existence of this Meta-made baby hippo, which she declares is “dead behind the eyes.” Is it though? Either way, these experiences led Turk to ponder a bleak future in which one can never know which imagery can be trusted. She concludes with this anecdote:

“I was recently scrolling through Instagram and shared a cute video of a bunny eating lettuce with my husband. It was a completely benign clip – but perhaps a little too adorable. Was it AI, he asked? I couldn’t tell. Even having to ask the question diminished the moment, and the cuteness of the video. In a world where anything can be fake, everything might be.”

That is true. An important point to remember when we see footage of a politician doing something horrible. Or if we get a distressed call from a family member begging for money. Or if we see a cute animal video but prefer to withhold the dopamine rush lest it turn out to be fake.

Cynthia Murrell, December 30, 2024

Geolocation Data: Available for a Price

December 30, 2024

According to a report from 404 Media, a firm called Fog Data Science is helping law enforcement compile lists of places visited by suspects. Ars Technica reveals, “Location Data Firm Helps Police Find Out When Suspects Visited their Doctor.” Jon Brodkin writes:

“Fog Data Science, which says it ‘harness[es] the power of data to safeguard national security and provide law enforcement with actionable intelligence,’ has a ‘Project Intake Form’ that asks police for locations where potential suspects and their mobile devices might be found. The form, obtained by 404 Media, instructs police officers to list locations of friends’ and families’ houses, associates’ homes and offices, and the offices of a person’s doctor or lawyer. Fog Data has a trove of location data derived from smartphones’ geolocation signals, which would already include doctors’ offices and many other types of locations even before police ask for information on a specific person. Details provided by police on the intake form seem likely to help Fog Data conduct more effective searches of its database to find out when suspects visited particular places. The form also asks police to identify the person of interest’s name and/or known aliases and their ‘link to criminal activity.’ ‘Known locations a POI [Person of Interest] may visit are valuable, even without dates/times,’ the form says. It asks for street addresses or geographic coordinates.”

See the article for an image of the form. It is apparently used to narrow down data points and establish suspects’ routine movements. It could also be used to, say, prosecute abortions, Brodkin notes.

Back in 2022, the Electronic Frontier Foundation warned of Fog Data’s geolocation data hoard. Its report detailed which law enforcement agencies were known to purchase Fog’s intel at the time. But where was Fog getting this data? From Venntel, the EFF found, which is the subject of a Federal Trade Commission action. The agency charges Venntel with “unlawfully tracking and selling sensitive location data from users, including selling data about consumers’ visits to health-related locations and places of worship.” The FTC’s order would prohibit Venntel, and parent company Gravy Analytics, from selling sensitive location data. It would also require they establish a “sensitive location data program.” We are not sure what that would entail. And we might never know: the decision may not be finalized until after the president-elect is sworn in.

Cynthia Murrell, December 30, 2024

Debbie Downer Says, No AI Payoff Until 2026

December 27, 2024

Holiday greetings from the Financial Review. Its story “Wall Street Needs to Prepare for an AI Winter” is a joyous description of what’s coming down the Information Highway. The uplifting article sings:

shovelling more and more data into larger models will only go so far when it comes to creating “intelligent” capabilities, and we’ve just about arrived at that point. Even if more data were the answer, those companies that indiscriminately vacuumed up material from any source they could find are starting to struggle to acquire enough new information to feed the machine.

Translating to rural Kentucky speak: “We been shoveling in the horse stall and ain’t found the nag yet.”

The flickering light bulb has apparently illuminated the idea that smart software is expensive to develop, train, optimize, run, market, and defend against allegations of copyright infringement.

To add to the profit shadow, Debbie Downer’s cousin compared OpenAI to Visa. The idea in “OpenAI Is Visa” is that Sam AI-Man’s company is working overtime to preserve its lead in AI and become a monopoly before competitors figure out how to knock off OpenAI. The write up says:

Either way, Visa and OpenAI seem to agree on one thing: that “competition is for losers.”

To add to the uncertainty about US AI “dominance,” Venture Beat reports:

DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch.

Does that suggest that the squabbling and mud wrestling among US firms can be body slammed by Chinese AI grapplers who are more agile? Who knows. However, in a series of tweets, DeepSeek suggested that its “cost” was less than $6 million. The idea is that what Chinese electric car pricing is doing to some EV manufacturers, China’s AI will do to US AI. Better and faster? I don’t know, but that “cheaper” angle will resonate with those asked to pump cash into the Big Dogs of US AI.

In January 2023, many were struck by the wonders of smart software. Will the same festive atmosphere prevail in 2025?

Stephen E Arnold, December 27, 2024

OpenAI Partners with Defense Startup Anduril to Bring AI to US Military

December 27, 2024

No smart software involved. Just a dinobaby’s work.

We learn from the Independent that “OpenAI Announces Weapons Company Partnership to Provide AI Tech to Military.” The partnership with Anduril represents an about-face for OpenAI. This will excite some people, scare others, and lead to remakes of the “Terminator.” Beyond Search thinks that automated smart death machines are so trendy. China also seems enthused. We learn:

“ChatGPT-maker OpenAI and high-tech defense startup Anduril Industries will collaborate to develop artificial intelligence-inflected technologies for military applications, the companies announced. ‘U.S. and allied forces face a rapidly evolving set of aerial threats from both emerging unmanned systems and legacy manned platforms that can wreak havoc, damage infrastructure and take lives,’ the companies wrote in a Wednesday statement. ‘The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.’ The companies framed the alliance as a way to secure American technical supremacy during a ‘pivotal moment’ in the AI race against China. They did not disclose financial terms.”

Of course not. Tech companies were once wary of embracing military contracts, but it seems those days are over. Why now? The article observes:

“The deals also highlight the increasing nexus between conservative politics, big tech, and military technology. Palmer Luckey, co-founder of Anduril, was an early, vocal supporter of Donald Trump in the tech world, and is close with Elon Musk. … Vice-president-elect JD Vance, meanwhile, is a protege of investor Peter Thiel, who co-founded Palantir, another of the companies involved in military AI.”

“Involved” is putting it lightly. And as readers may have heard, Musk appears to be best buds with the president-elect. He is also at the head of the new Department of Government Efficiency, which sounds like a federal agency but is not. Yet. The commission is expected to strongly influence how the next administration spends our money. Will they adhere to multinational guidelines on military use of AI? Do PayPal alums have any hand in this type of deal?

Cynthia Murrell, December 27, 2024

AI Oh-Oh: Innovation Needed Now

December 27, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I continue to hear about AI whiz kids “running out of data.” When people and institutions don’t know what’s happening, it is easy to just smash and grab. The copyright litigation and the willingness of AI companies to tie up with content owners make explicit that the zoom zoom days are over.

image

A smart software wizard is wondering how to get over, under, around, or through the stone wall of exhausted content. Thanks, Grok, good enough.

“The AI Revolution Is Running Out of Data. What Can Researchers Do?” is a less crazy discussion of the addictive craze which has made smart software or — wait for it — agentic intelligence the next big thing. The write up states:

The Internet is a vast ocean of human knowledge, but it isn’t infinite. And artificial intelligence (AI) researchers have nearly sucked it dry.

“Sucked it dry” and the systems still hallucinate. Guard rails prevent users from obtaining information germane to certain government investigations. The image generators refuse to display a classroom of students paying attention to mobile phones, not the teacher. Yep, dry. More like “run aground.”

The fix to running out of data, according to the write up, is:

plans to work around it, including generating new data and finding unconventional data sources.

One approach is to “find data.” The write up says:

one option might be to harvest non-public data, such as WhatsApp messages or transcripts of YouTube videos. Although the legality of scraping third-party content in this manner is untested, companies do have access to their own data, and several social-media firms say they use their own material to train their AI models. For example, Meta in Menlo Park, California, says that audio and images collected by its virtual-reality headset Meta Quest are used to train its AI.

And what about this angle?

Another option might be to focus on specialized data sets such as astronomical or genomic data, which are growing rapidly. Fei-Fei Li, a prominent AI researcher at Stanford University in California, has publicly backed this strategy. She said at a Bloomberg technology summit in May that worries about data running out take too narrow a view of what constitutes data, given the untapped information available across fields such as health care, the environment and education.

If you want more of these work arounds, please, consult the Nature article.

Several observations are warranted:

First, the current AI “revolution” is the result of many years of research and experimentation. The fact that today’s AI produces reasonably good high school essays and allows people to interact with a search system is a step forward. However, like most search-based innovations, the systems have flaws.

Second, the use of neural networks and the creation by Google (allegedly) of the transformer has provided fuel to fire the engines of investment. The money machines are chasing the next big thing. The problem is that the costs are now becoming evident. It is tough to hide the demand for electric power. (Hey, no problem how about a modular thorium reactor. Yeah, just pick one up at Home Depot. The small nukes are next to the Honda generators.) There is the need for computation. Google can talk about quantum supremacy, but good old fashioned architecture is making Nvidia a big dog in AI. And the cost of people? It is off the chart. Forget those coding boot camps and learn to do matrix math in your head.

Third, the real-world applications like those Apple is known for don’t work very well. After vaporware time, Apple is pushing OpenAI to iPhone users. Will Siri actually work? Apple cannot afford to whiff too many big plays. Do you wear your Apple headset, or do you have warm and fuzzies for the 2024 Mac Mini, which is a heck of a lot cheaper than some of the high power Macs from a year ago? What about Copilot in Notepad? Hey, that’s helpful to some Notepad users. How many? Well, that’s another question. How many people want smart software doing the Clippy thing with every click?

Net net: It is now time for innovation, not marketing. Which of the Big Dog AI outfits will break through the stone walls? The bigger question is, “What if it is an innovator in China?” Impossible, right?

Stephen E Arnold, December 27, 2024

Bold Allegation: Colombia, the US, and Pegasus

December 27, 2024

The United States assists its allies, but why did the Biden Administration pony up $11 million for hacking software? DropSiteNews investigates the software, its huge price tag, and why the US bought it in “The U.S. Bought Pegasus For Colombia With $11 Million In Cash. Now Colombians Are Asking Why.” Colombians are just as curious as Americans are about why the US coughed up $11 million in cash for the Israeli hacking software.

The Colombian ambassador to the US, Daniel García-Peña, confirmed that Washington, DC, assisted his country in buying the software so the Colombian government could track drug cartels. The software was purchased and used throughout 2021 and 2022. Pegasus usage stopped in 2022, and it was never used against politicians, such as former Colombian president Iván Duque. The Biden Administration remained in control of the Pegasus software and assured that the Colombian government only provided spying targets.

It’s understandable why Colombia’s citizens were antsy about Pegasus:

“García-Peña’s revelations come two months after Colombian President Gustavo Petro delivered a televised speech in which he revealed some of the details of the all-cash, $11-million purchase, including that it has been split across two installments, flown from Bogotá and deposited into the Tel Aviv bank account belonging to NSO Group, the company that owns Pegasus. Soon after the speech, Colombia’s attorney general opened an investigation into the purchase and use of Pegasus. In October, Petro accused the director of the NSO Group of money laundering, due to the tremendous amount of cash he transported on the flights.

“The timeline of the purchase and use of Pegasus overlaps with a particularly turbulent time in Colombia. A social movement had begun protesting against Duque, while in the countryside, Colombia’s security forces were killing or arresting major guerrilla and cartel leaders. At the time, Petro, the first left-wing president in the country’s recent history, was campaigning for the presidency.”

Pegasus is powerful hacking software, and Colombians were suspicious about how their government acquired it. Journalists were especially curious where the influx of cash came from. They slowly discovered it was from the United States, with the intent to spy on drug cartels. Colombia is a tumultuous nation with crime worse than the Wild West. Pegasus hopefully caught the worst of the bad actors.

Whitney Grace, December 27, 2024

Boxing Day Cheat Sheet for AI Marketing: Happy New Year!

December 27, 2024

Other than automation and taking the creative talent out of the entertainment industry, where is AI headed in 2025? The lowdown for the upcoming year can be found on the Techknowledgeon AI blog and its post: “The Rise Of Artificial Intelligence: Know The Answers That Makes You Sensible About AI.”

The article acts as a primer for what AI is, its advantages, and answers to important questions about the technology. The questions that grab our attention are “Will AI take over humans one day?” and “Is AI an Existential Threat to Humanity?” Here’s the answer to the first question:

“The idea of AI taking over humanity has been a recurring theme in science fiction and a topic of genuine concern among some experts. While AI is advancing at an incredible pace, its potential to surpass or dominate human capabilities is still a subject of intense debate. Let’s explore this question in detail.

AI, despite its impressive capabilities, has significant limitations:

  • Lack of General Intelligence: Most AI today is classified as narrow AI, meaning it excels at specific tasks but lacks the broader reasoning abilities of human intelligence.
  • Dependency on Humans: AI systems require extensive human oversight for design, training, and maintenance.
  • Absence of Creativity and Emotion: While AI can simulate creativity, it doesn’t possess intrinsic emotions, intuition, or consciousness.

And the answer to the second is:

“Instead of "taking over," AI is more likely to serve as an augmentation tool:

  • Workforce Support: AI-powered systems are designed to complement human skills, automating repetitive tasks and freeing up time for creative and strategic thinking.
  • Health Monitoring: AI assists doctors but doesn’t replace the human judgment necessary for patient care.
  • Smart Assistants: Tools like Alexa or Google Assistant enhance convenience but operate under strict limitations.”

So AI has a long way to go before it replaces humanity, and the singularity of surpassing human intelligence is either a long way off or might never happen.

This dossier includes useful information to understand where AI is going and will help anyone interested in learning what AI algorithms are projected to do in 2025.

Whitney Grace, December 27, 2024

2025 Consulting Jive

December 26, 2024

Here you go. I have extracted a list of the jargon one needs to write reports, give talks, and mesmerize those with a desire to be the smartest people in the room:

  • Agentic AI
  • AI governance platforms
  • Ambient invisible intelligence
  • Augmented human capability
  • Autonomous businesses
  • BBMIs (Brain-Body Machine Interfaces)
  • Brand reputation
  • Business benefits
  • Contextual awareness
  • Continuous adaptive trust model
  • Cryptography
  • Data privacy
  • Disinformation security
  • Energy-efficient computing
  • Guardrails
  • Hybrid computing
  • Human-machine synergy
  • Identity validation
  • Immersive experiences
  • Model lifecycle management
  • Multilayered adaptive learning
  • Neurological enhancement
  • Polyfunctional robots
  • Post-quantum cryptography (PQC)
  • Provenance
  • Quantum computing (QC)
  • Real-time personalization
  • Risk scoring
  • Spatial computing
  • Sustainability
  • Transparency
  • UBMIs (User-Brain Machine Interfaces)

Did this spark your enthusiasm for modern jingo jango? Hats off to the Gartner Group. Wow! Great. Is the list complete? Of course not. I left out bullsh*t.

Stephen E Arnold, December 26, 2024

Modern Management Revealed and It Is Jaundiced with a Sickly Yellowish Cast

December 26, 2024

This blog post is the work of an authentic dinobaby. No smart software was used.

I was zipping through the YCombinator list of “important” items and spotted this one: “Time for a Code-Yellow?: A Blunt Instrument That Works.” I associated Code Yellow with the Google knee jerk in early 2023 when Microsoft rolled out its smart software deal with OpenAI. Immediately Google was on the back foot. Word filtered across the blogs and “real” news sources that the world’s biggest online ad outfit and most easily sued company was reeling. The company declared a “Code Yellow,” a “Code Red,” and probably a Code 300 Terahertz to really goose the Googlers.

image

Grok does a code yellow. Good enough.

I found the reaction, the fumbling, and the management imperative as wonky as McKinsey getting ensnared in its logical opioid consulting work. What will those MBAs come up with next?

The “Time for a Code Yellow” write up is interesting. Read it. I want to focus on a handful of supplemental observations which appeared in the comments on the article. These, I believe, make clear the “problem” at the root of many societal ills, including the egregious actions of big companies, some government agencies, and those do-good non-governmental organizations.

Here we go and the italics are my observation on the individual insights:

Tubojet1321 says: “If everything is an emergency, nothing is an emergency.” Excellent observation.

nine_zeros says: “Eventually everyone learns inaction.” Yep, meetings are more important than doing. The fix is to have another meeting.

magical hippo says: “My dad used to flippantly say he had three piles of papers on his desk: ‘urgent,’ ‘very urgent,’ and ‘no longer urgent.’” The modern organization creates bureaucratic friction at a much faster pace.

x0x0 says: “I’m utter sh*t at management, [I] refuse to prioritize until it’s a company-threatening crisis, and I’m happy to make my team suffer for my incompetence.” Outstanding self critique.

Lammy says: “The etymology is not green/yellow/red. It’s just not-Yellow or yes-Yellow. See Steven Levy’s In The Plex (2011) pg186: ‘A Code Yellow is named after a tank top of that color owned by engineering director Wayne Rosing. During Code Yellow a leader is given the shirt and can tap anyone at Google and force him or her to drop a current project to help out. Often, the Code Yellow leader escalates the emergency into a war room situation and pulls people out of their offices and into a conference room for a more extended struggle.’” Really? I thought the popularization of “yellow” as a caution or warning became a shared understanding in the US with the advent of trains, long before T-shirts and Google. Note: Train professionals used a signaling system before Messrs. Brin and Page “discovered” Jon Kleinberg’s CLEVER patent.

lizzas says: “24/7 oncall to … be yanked onto something the boss fancies. No thanks. What about… planning?” Planning. Let’s call a meeting, talk about a plan, then have a meeting to discuss options, and finally have a meeting to do planning. Sounds like a plan.

I have a headache from the flashing yellow lights. Amazing about Google’s originality, isn’t it? Oh, over the holiday downtime, check out Dr. Jon Kleinberg and what he was doing at IBM’s Almaden Research Laboratory in US6112202, filed in 1997. Are those yellow lights still flashing?

Stephen E Arnold, December 26, 2024
