WhatsApp: Chasing More Money

January 1, 2025

Meta aims to make WhatsApp indispensable to businesses around the world. The app is currently responsible for just a fraction of the company’s revenue, but Zuckerberg seems to have high hopes for the messaging platform. Rest of World’s thorough piece, “How WhatsApp Ate the World,” describes the plan. Writer Issie Lapowsky details the app’s evolution since Facebook (now Meta) bought it and examines where the company plans to take it from here. We learn:

“WhatsApp initially achieved that global dominance in large part by doing just one thing very well: enabling cheap, private, and reliable messaging on almost any phone, almost anywhere in the world. But in the decade since Meta acquired WhatsApp for an eye-watering $22 billion in 2014, the app has been transformed from a narrowly focused utilitarian tool into a sort of ‘everything app.’ In countries like India, Brazil, Mexico, and Indonesia, WhatsApp is now also a place for scheduling doctor’s appointments and conducting real estate deals — and buying Sabharwal’s ceramic ducks. In Brazil, the beauty juggernaut L’Oréal now makes an average of 25% of its online direct-to-consumer sales on WhatsApp. The shift has been driven, of course, by money. WhatsApp has never been much of a moneymaker. While Meta makes billions off mining people’s personal data to sell more ads, WhatsApp is an encrypted app, whose founders once very publicly swore off advertising altogether. Lately, however, WhatsApp has been aggressively luring big businesses to its suite of paid messaging products for businesses, and openly flirting with the possibility of introducing ads in the not-too-distant future.”

Because of course it is. Meta insists it respects WhatsApp’s original mission of privacy, pledging to keep its end-to-end encryption intact. The company has even added privacy tools that remind us of the old Telegram: disappearing messages, encrypted backups, and shielded IP addresses in calls. Is Meta attempting to move forward by stepping into the past? Even with these privacy promises, Lapowsky notes:

“And yet, with each new revenue-boosting feature, WhatsApp has added a little asterisk to its core privacy promises, according to Nathalie Maréchal, co-director of the privacy and data program at the Center for Democracy & Technology in Washington, D.C. ‘It’s not necessarily that those asterisks are illegitimate. It’s that they’re complicated,’ she told Rest of World, ‘and many users are either not going to take the time, or aren’t going to prioritize, fully understanding it.'”

Ah, details. Another key part of Zuck’s vision is no surprise—generative AI. Meta’s chatbot is now a standard part of the app’s search bar, while a customer-service version and AI marketing tools are now available to businesses. Will all these changes turn WhatsApp into the moneymaker the tech mogul envisions?

Cynthia Murrell, January 1, 2025

The US and Math: Not So Hot

January 1, 2025

In recent decades, the US educational system has increasingly emphasized teaching to the test over niceties like critical thinking and deep understanding. How is that working out for us? Not well. Education news site Chalkbeat reports, "U.S. Math Scores Drop on Major International Test."

Last year, the Trends in International Mathematics and Science Study assessed over 650,000 fourth and eighth graders in 64 countries. The test is performed every four years, and its emphasis is on foundational skills in those subjects. Crucial knowledge for our young people to have, not just for themselves but for the future of the country. That future is not looking so good. The write-up includes a chart of the rankings, with the U.S. now squarely in the middle. We learn:

"U.S. fourth graders saw their math scores drop steeply between 2019 and 2023 on a key international test even as more than a dozen other countries saw their scores improve. Scores dropped even more steeply for American eighth graders, a grade where only three countries saw increases. The declines in fourth grade mathematics in the U.S. were among the largest in the participating countries, though American students are still in the middle of the pack internationally. The extent of the decline seems to be driven by the lowest performing students losing more ground, a worrying trend that predates the pandemic."

So we can’t just blame this on the pandemic, when schools were shuttered and students "attended" classes remotely. A pity. The results are no surprise to many who have been sounding alarm bells for years. So why not just drop perpetual testing and return to more effective instruction? It couldn’t have anything to do with corporate interests, could it? Naw, even the jaded and powerful must know the education of our youth is too important to put behind profits.

Cynthia Murrell, January 1, 2025

Chinese AI Lab Deepseek Grinds Ahead…Allegedly

December 31, 2024

Is the world’s most innovative AI company a low-profile Chinese startup? ChinaTalk examines “Deepseek: The Quiet Giant Leading China’s AI Race.” The Chinese-tech news site shares an annotated translation of a rare interview with DeepSeek CEO Liang Wenfeng. The journalists note the firm’s latest R1 model just outperformed OpenAI’s o1. In their introduction to the July interview, they write:

“Before Deepseek, CEO Liang Wenfeng’s main venture was High-Flyer, a top 4 Chinese quantitative hedge fund last valued at $8 billion. Deepseek is fully funded by High-Flyer and has no plans to fundraise. It focuses on building foundational technology rather than commercial applications and has committed to open sourcing all of its models. It has also singlehandedly kicked off price wars in China by charging very affordable API rates. Despite this, Deepseek can afford to stay in the scaling game: with access to High-Flyer’s compute clusters, Dylan Patel’s best guess is they have upwards of ‘50k Hopper GPUs,’ orders of magnitude more compute power than the 10k A100s they cop to publicly. Deepseek’s strategy is grounded in their ambition to build AGI. Unlike previous spins on the theme, Deepseek’s mission statement does not mention safety, competition, or stakes for humanity, but only ‘unraveling the mystery of AGI with curiosity’. Accordingly, the lab has been laser-focused on research into potentially game-changing architectural and algorithmic innovations.”

For example, we learn:

“They proposed a novel MLA (multi-head latent attention) architecture that reduces memory usage to 5-13% of the commonly used MHA architecture. Additionally, their original DeepSeekMoESparse structure minimized computational costs, ultimately leading to reduced overall costs.”
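To make the memory claim concrete, here is a back-of-the-envelope sketch. It is not DeepSeek’s implementation; it simply compares the size of a standard multi-head attention KV cache with a single compressed latent vector cached per token, and every dimension in it (layer count, head count, head size, latent size, context length) is an illustrative assumption rather than a published figure.

```python
# Back-of-the-envelope KV-cache sizing: standard multi-head attention (MHA)
# versus a latent-compression scheme in the spirit of MLA. Every number below
# is an illustrative assumption, not a DeepSeek hyperparameter.

def mha_kv_cache_bytes(layers, heads, head_dim, seq_len, bytes_per_value=2):
    # MHA caches one key vector and one value vector per head, per token, per layer.
    return layers * seq_len * heads * head_dim * 2 * bytes_per_value

def latent_kv_cache_bytes(layers, latent_dim, seq_len, bytes_per_value=2):
    # A latent scheme caches one compressed vector per token, per layer, and
    # reconstructs the per-head keys and values from it at attention time.
    return layers * seq_len * latent_dim * bytes_per_value

if __name__ == "__main__":
    layers, heads, head_dim, seq_len = 60, 128, 128, 32_768  # hypothetical model
    latent_dim = 2_048                                        # hypothetical latent size

    mha = mha_kv_cache_bytes(layers, heads, head_dim, seq_len)
    mla = latent_kv_cache_bytes(layers, latent_dim, seq_len)
    print(f"MHA cache:    {mha / 2**30:.1f} GiB")
    print(f"Latent cache: {mla / 2**30:.1f} GiB ({mla / mha:.1%} of MHA)")
```

With those made-up dimensions, the latent cache works out to roughly six percent of the full MHA cache, which is the order of saving the quoted passage describes.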

Those in Silicon Valley are well aware of this “mysterious force from the East,” with several AI head honchos heaping praise on the firm. The interview is split into five parts. The first examines the large-model price war set off by DeepSeek’s V2 release. Next, Liang describes how an emphasis on innovation over imitation sets his firm apart but, in part three, notes that more money does not always lead to more innovation. Part four takes a look at the talent behind DeepSeek’s work, and in part five the CEO looks to the future. Interested readers should check out the full interview. Headquartered in Hangzhou, China, the young firm was founded in 2023.

Cynthia Murrell, December 31, 2024

Technical Debt: A Weight Many Carry Forward to 2025

December 31, 2024

Do you know what technical debt is? It’s also called design debt and code debt. It refers to the consequences a development team incurs when it prioritizes a project’s delivery over a fully functional product; usually the work has to be redone. Data debt is a type of technical debt, and it refers to the accumulated costs of poor data management that hinder decision-making and efficiency. Which debt is worse? The New Stack delves into that topic in “Who’s the Bigger Villain? Data Debt vs. Technical Debt.”

Technical debt should only be taken on for short-term goals, such as meeting a release date; it shouldn’t be standard operating procedure. Data debt’s downside is that it forces manual data management, degrades data quality, slows decision making, and increases costs. The two seem indistinguishable, but the difference is that with technical debt you can scrap the code and start over. That’s not an option with data debt, and the ramifications are bad:

“Reckless and unintentional data debt emerged from cheaper storage costs and a data-hoarding culture, where organizations amassed large volumes of data without establishing proper structures or ensuring shared context and meaning. It was further fueled by resistance to a design-first approach, often dismissed as a potential bottleneck to speed. It may also have sneaked up through fragile multi-hop medallion architectures in data lakes, warehouses, and lakehouses.”

The article goes on to recommend adopting data modeling early and explains how to restructure your current systems: chart your existing data, then project where you want it to go. It’s called planning:

“To reduce your data debt, chart your existing data into a transparent, comprehensive data model that maps your current data structures. This can be approached iteratively, addressing needs as they arise — avoid trying to tackle everything at once.

Engage domain experts and data stakeholders in meaningful discussions to align on the data’s context, significance, and usage.

From there, iteratively evolve these models — both for data at rest and data in motion—so they accurately reflect and serve the needs of your organization and customers.

Doing so creates a strong foundation for data consistency, clarity, and scalability, unlocking the data’s full potential and enabling more thoughtful decision-making and future innovation.”
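The advice stays abstract, but the first step, charting existing data into an explicit shared model, can be as modest as writing the structures down in code where everyone can review them. Below is a minimal sketch of that idea; the entities, fields, and validation rule are hypothetical examples, not anything taken from the article.

```python
# A minimal data-model "chart": explicit types with documented meaning for data
# that previously lived as loosely structured records. All entity and field
# names here are hypothetical illustrations.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class OrderStatus(Enum):
    PLACED = "placed"
    SHIPPED = "shipped"
    CANCELLED = "cancelled"

@dataclass(frozen=True)
class Customer:
    customer_id: str   # stable business key, agreed with domain experts
    country: str       # ISO 3166-1 alpha-2 code, not a free-text region name
    signup_date: date

@dataclass(frozen=True)
class Order:
    order_id: str
    customer_id: str   # reference to Customer.customer_id
    status: OrderStatus
    total_cents: int   # money as integer cents to avoid floating-point drift

def validate_order(order: Order) -> None:
    """Fail fast instead of letting ambiguous records pile up as data debt."""
    if order.total_cents < 0:
        raise ValueError(f"{order.order_id}: negative order total")
```

The point is not these particular classes; it is that the model is written down, reviewable, and can be evolved iteratively, which is what the quoted passage recommends.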

Isn’t this just good data, project, or organizational management? Charting is a basic tool taught in kindergarten. Why do people forget it so quickly?

Whitney Grace, December 31, 2024

Microservices Are Perfect, Are They Not?

December 31, 2024

“Microservices” is another synergistic jargon term making the rounds of the IT industry like the latest viral video. Microservices are replacing monolithic architectures and are supposed to resolve all architectural problems. Cerbos’s article says otherwise: “The Value Of Monitoring And Observability In Microservices, And Associated Challenges.” The article is part of a ten-part series on how to handle the challenges of moving from a monolithic architecture to microservices.

This particular article is chapter five, and it stresses that observability and monitoring are essential for knowing what is happening at every level of a microservices application. That matters because a microservices environment runs many tasks concurrently, which makes traditional tools inadequate. Observability means using tools to infer the system’s internal state, while monitoring tools collect and analyze traces, logs, and metrics. Combined, the two give an overall picture of a system’s health. The challenges of installing monitoring and observability tools in a microservices architecture are as follows:

1. “Interaction of data silos. Treating each microservice separately when implementing monitoring and observability solutions creates “data silos”. These silos are easy to understand in isolation, without fully understanding how they interact as one. This can lead to difficulty when debugging or understanding the root cause of problems.

2. Scalability. As your microservices architecture scales, the complexity of monitoring and observability grows with it. So monitoring everything with the same tools you were using for a single monolith quickly becomes unmanageable.

3. Lack of standard tools. One of the benefits of microservices is that different teams can choose the data storage system that makes the most sense for their microservice (as we covered in blog 2 of the series, “Data management and consistency”). But, if you don’t have a standard for monitoring and observability, tying siloed insights together to gain insights on the system as a whole is challenging.”

The foundations of observability are tools for metrics, logging, and tracing. Metrics are quantitative measurements of a system, including error rates, throughput, resource utilization, and response time; they indicate the system’s overall performance. Logging means capturing and centralizing the log messages produced by applications and services. Tracing follows requests end to end as they pass between services. Together, these signals provide valuable insight into potential bottlenecks and errors.
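For readers who want to see the three pillars in miniature, here is a self-contained sketch using only the Python standard library. A real system would use a framework such as OpenTelemetry; the service names, log fields, and metric names below are invented for illustration.

```python
# Tiny illustration of the three observability pillars: metrics (request count
# and latency), structured log lines, and a trace ID propagated across "services".
# Standard library only; the services and field names are invented examples.
import logging
import time
import uuid
from collections import defaultdict

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("checkout")

METRICS = defaultdict(list)  # metric name -> list of observations

def record_metric(name: str, value: float) -> None:
    METRICS[name].append(value)

def inventory_service(trace_id: str, item: str) -> bool:
    # Downstream service: the shared trace_id ties its log lines to the request.
    log.info("trace=%s service=inventory item=%s", trace_id, item)
    return True

def checkout_service(item: str) -> None:
    trace_id = uuid.uuid4().hex  # start of the trace for this request
    start = time.perf_counter()
    log.info("trace=%s service=checkout start item=%s", trace_id, item)
    in_stock = inventory_service(trace_id, item)
    elapsed_ms = (time.perf_counter() - start) * 1000
    record_metric("checkout.latency_ms", elapsed_ms)
    record_metric("checkout.requests", 1)
    log.info("trace=%s service=checkout done in_stock=%s", trace_id, in_stock)

if __name__ == "__main__":
    checkout_service("ceramic-duck")
    latencies = METRICS["checkout.latency_ms"]
    print("avg latency ms:", sum(latencies) / len(latencies))
```

The same trace ID appearing in both services’ log lines is what lets a human, or a tracing backend, reconstruct the path of a single request, which is the cross-service visibility the article is concerned with.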

This article verifies what we already know with every new technology adoption: same problems, new packaging. There isn’t any solution that will solve all technology problems. New technology has its own issues that will resolve old problems but bring up new ones. There’s no such thing as a one stop shop.

Whitney Grace, December 31, 2024

Google: Making a Buck Is the Name of the Game

December 30, 2024

This blog post was crafted by a still-living dinobaby.

This is a screenshot of YouTube with an interesting advertisement. Take a look:

[screenshot: YouTube page displaying the advertisement]

Here’s a larger version of the ad:

[screenshot: enlarged version of the ad]

Now here’s the landing page for the teaser, which looks like a link to a video:

[screenshot: landing page for the teaser]

The site advertising on YouTube.com is Badgeandwallet.com. The company offers a number of law enforcement related products. Here’s a sample of the badges available to a person exploring the site:

[screenshot: sample of badges offered on the site]

How many law enforcement officers are purchasing badges from an ad on YouTube? At some US government facilities, shops will provide hats and jackets with agency identification on them. However, to make a purchase, a visitor to the store must present current credentials.

YouTube.com and its parent are under scrutiny for a number of the firm’s business tactics. I reacted negatively to the inclusion of this advertisement in search results related to real estate in Beverly Hills, California.

Is Google the brilliant smart software company it says it is, or is the company just looking to make a buck with ads likely to be viewed by individuals who have little or nothing to do with law enforcement or government agencies?

I hope that 2025 will allow Google to demonstrate that it wants to be viewed as a company operating with a functioning moral compass. My hunch is that I will be disappointed as I have been with quantum supremacy and Googley AI.

Stephen E Arnold, December 30, 2024

AI Video Is Improving: Hello, Hollywood!

December 30, 2024

Has AI video gotten scarily believable? Well, yes. For anyone who has not gotten the memo, The Guardian declares, “Video Is AI’s New Frontier—and It Is so Persuasive, We Should All Be Worried.” Writer Victoria Turk describes recent developments:

“Video is AI’s new frontier, with OpenAI finally rolling out Sora in the US after first teasing it in February, and Meta announcing its own text-to-video tool, Movie Gen, in October. Google made its Veo video generator available to some customers this month. Are we ready for a world in which it is impossible to discern which of the moving images we see are real?”

Ready or not, here it is. No amount of hand-wringing will change that. Turk mentions ways bad actors abuse the technology: Scammers who impersonate victims’ loved ones to extort money. Deepfakes created to further political agendas. Fake sexual images and videos featuring real people. She also cites safeguards like watermarks and content restrictions as evidence AI firms understand the potential for abuse.

But the author’s main point seems to be more philosophical. It was prompted by convincing fake footage of a tree frog, documentary style. She writes:

“Yet despite the technological feat, as I watched the tree frog I felt less amazed than sad. It certainly looked the part, but we all knew that what we were seeing wasn’t real. The tree frog, the branch it clung to, the rainforest it lived in: none of these things existed, and they never had. The scene, although visually impressive, was hollow.”

Turk also laments the existence of this Meta-made baby hippo, which she declares is “dead behind the eyes.” Is it though? Either way, these experiences led Turk to ponder a bleak future in which one can never know which imagery can be trusted. She concludes with this anecdote:

“I was recently scrolling through Instagram and shared a cute video of a bunny eating lettuce with my husband. It was a completely benign clip – but perhaps a little too adorable. Was it AI, he asked? I couldn’t tell. Even having to ask the question diminished the moment, and the cuteness of the video. In a world where anything can be fake, everything might be.”

That is true. An important point to remember when we see footage of a politician doing something horrible. Or if we get a distressed call from a family member begging for money. Or if we see a cute animal video but prefer to withhold the dopamine rush lest it turn out to be fake.

Cynthia Murrell, December 30, 2024

Geolocation Data: Available for a Price

December 30, 2024

According to a report from 404 Media, a firm called Fog Data Science is helping law enforcement compile lists of places visited by suspects. Ars Technica reveals, “Location Data Firm Helps Police Find Out When Suspects Visited their Doctor.” Jon Brodkin writes:

“Fog Data Science, which says it ‘harness[es] the power of data to safeguard national security and provide law enforcement with actionable intelligence,’ has a ‘Project Intake Form’ that asks police for locations where potential suspects and their mobile devices might be found. The form, obtained by 404 Media, instructs police officers to list locations of friends’ and families’ houses, associates’ homes and offices, and the offices of a person’s doctor or lawyer. Fog Data has a trove of location data derived from smartphones’ geolocation signals, which would already include doctors’ offices and many other types of locations even before police ask for information on a specific person. Details provided by police on the intake form seem likely to help Fog Data conduct more effective searches of its database to find out when suspects visited particular places. The form also asks police to identify the person of interest’s name and/or known aliases and their ‘link to criminal activity.’ ‘Known locations a POI [Person of Interest] may visit are valuable, even without dates/times,’ the form says. It asks for street addresses or geographic coordinates.”

See the article for an image of the form. It is apparently used to narrow down data points and establish suspects’ routine movements. It could also be used to, say, prosecute abortions, Brodkin notes.

Back in 2022, the Electronic Frontier Foundation warned of Fog Data’s geolocation data hoard. Its report detailed which law enforcement agencies were known to purchase Fog’s intel at the time. But where was Fog getting this data? From Venntel, the EFF found, which is the subject of a Federal Trade Commission action. The agency charges Venntel with “unlawfully tracking and selling sensitive location data from users, including selling data about consumers’ visits to health-related locations and places of worship.” The FTC’s order would prohibit Venntel, and parent company Gravy Analytics, from selling sensitive location data. It would also require they establish a “sensitive data location program.” We are not sure what that would entail. And we might never know: the decision may not be finalized until after the president-elect is sworn in.

Cynthia Murrell, December 30, 2024

Debbie Downer Says, No AI Payoff Until 2026

December 27, 2024

Holiday greetings from the Financial Review. Its story “Wall Street Needs to Prepare for an AI Winter” is a joyous description of what’s coming down the Information Highway. The uplifting article sings:

shovelling more and more data into larger models will only go so far when it comes to creating “intelligent” capabilities, and we’ve just about arrived at that point. Even if more data were the answer, those companies that indiscriminately vacuumed up material from any source they could find are starting to struggle to acquire enough new information to feed the machine.

Translating to rural Kentucky speak: “We been shoveling in the horse stall and ain’t found the nag yet.”

The flickering light bulb has apparently illuminated the idea that smart software is expensive to develop, train, optimize, run, market, and defend against allegations of copyright infringement.

To add to the profit shadow, Debbie Downer’s cousin compared OpenAI to Visa. The idea in “OpenAI Is Visa” is that Sam AI-Man’s company is working overtime to preserve its lead in AI and become a monopoly before competitors figure out how to knock off OpenAI. The write up says:

Either way, Visa and OpenAI seem to agree on one thing: that “competition is for losers.”

To add to the uncertainty about US AI “dominance,” Venture Beat reports:

DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch.

Does that suggest that the squabbling and mud wrestling among US firms can be body slammed by Chinese AI grapplers who are more agile? Who knows. However, in a series of tweets, DeepSeek suggested that its “cost” was less than $6 million. The idea is that China’s AI will do to US AI what Chinese electric car pricing is doing to some EV manufacturers. Better and faster? I don’t know, but that “cheaper” angle will resonate with those asked to pump cash into the Big Dogs of US AI.

In January 2023, many were struck by the wonders of smart software. Will the same festive atmosphere prevail in 2025?

Stephen E Arnold, December 27, 2024

OpenAI Partners with Defense Startup Anduril to Bring AI to US Military

December 27, 2024

No smart software involved. Just a dinobaby’s work.

We learn from the Independent that “OpenAI Announces Weapons Company Partnership to Provide AI Tech to Military.” The partnership with Anduril represents an about-face for OpenAI. This will excite some people, scare others, and lead to remakes of the “Terminator.” Beyond Search thinks that automated smart death machines are so trendy. China also seems enthused. We learn:

“ChatGPT-maker OpenAI and high-tech defense startup Anduril Industries will collaborate to develop artificial intelligence-inflected technologies for military applications, the companies announced. ‘U.S. and allied forces face a rapidly evolving set of aerial threats from both emerging unmanned systems and legacy manned platforms that can wreak havoc, damage infrastructure and take lives,’ the companies wrote in a Wednesday statement. ‘The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.’ The companies framed the alliance as a way to secure American technical supremacy during a ‘pivotal moment’ in the AI race against China. They did not disclose financial terms.”

Of course not. Tech companies were once wary of embracing military contracts, but it seems those days are over. Why now? The article observes:

“The deals also highlight the increasing nexus between conservative politics, big tech, and military technology. Palmer Luckey, co-founder of Anduril, was an early, vocal supporter of Donald Trump in the tech world, and is close with Elon Musk. … Vice-president-elect JD Vance, meanwhile, is a protege of investor Peter Thiel, who co-founded Palantir, another of the companies involved in military AI.”

“Involved” is putting it lightly. And as readers may have heard, Musk appears to be best buds with the president-elect. He is also at the head of the new Department of Government Efficiency, which sounds like a federal agency but is not. Yet. The commission is expected to strongly influence how the next administration spends our money. Will they adhere to multinational guidelines on military use of AI? Do PayPal alums have any hand in this type of deal?

Cynthia Murrell, December 27, 2024
