The Very Expensive AI Horse Race

December 4, 2024

This write up is from a real and still-alive dinobaby. If there is art, smart software has been involved. Dinobabies have many skills, but Gen Z art is not one of them.

One of the academic nemeses of smart software is a professional named Gary Marcus. Among his many intellectual accomplishments is a cameo appearance on a former Jack Benny child star’s podcast. Mr. Marcus contributes his views of smart software to the person who, for a number of years, has been a voice actor on the Simpsons cartoon.


The big four robot stallions are racing to a finish line. Is the finish line moving away from the equines faster than the steeds can run? Thanks, MidJourney. Good enough.

I want to pay attention to Mr. Marcus’ Substack post “A New AI Scaling Law Shell Game?” The main idea is that the scaling law has entered popular computer jargon. Once the lingo of Galileo, “scaling law” now means that AI, like CPUs, is part of the belief that technology just gets better as it gets bigger.

In this essay, Mr. Marcus asserts that getting bigger may not work unless humanoids (presumably assisted by AI) innovate other enabling processes. Mr. Marcus is aware of the cost of infrastructure, the cost of electricity, and the probable costs of exhausting content.

From my point of view, a bit more empirical “evidence” would be useful. (I am aware of academic research fraud.) Also, Mr. Marcus references me when he says, “Keep your hands on your wallet.” I am not sure that a fix is possible. The analogy is the old chestnut about changing a Sopwith Camel’s propeller when the aircraft is in a dogfight and the synchronized machine gun is firing through the propeller.

I want to highlight one passage in Mr. Marcus’ essay and offer a handful of comments. Here’s the passage I noted:

Over the last few weeks, much of the field has been quietly acknowledging that recent (not yet public) large-scale models aren’t as powerful as the putative laws were predicting. The new version is that there is not one scaling law, but three: scaling with how long you train a model (which isn’t really holding anymore), scaling with how long you post-train a model, and scaling with how long you let a given model wrestle with a given problem (or what Satya Nadella called scaling with “inference time compute”).

I think this is a paragraph I will add to my quotes file. The reasons are:

First, investors, would-be entrepreneurs, and giant outfits really want a next big thing. Microsoft fired the opening shot in the smart software war in early 2023. Mr. Nadella suggested that smart software would be the next big thing for Microsoft. The company has invested in making good on this statement. Now Microsoft 365 is infused with smart software and Azure is burbling with digital glee with its “we’re first” status. However, a number of people have asked, “Where’s the financial payoff?” The answer is standard Silicon Valley catechism: “The payoff is going to be huge. Invest now.” If prayers could power hope, AI is going to be hyperbolic just like the marketing collateral for AI promises. But it is almost 2025, and those billions have not generated more billions and profit for the Big Dogs of AI. Just sayin’.

Second, the idea that the scaling law is really multiple scaling laws is interesting. But if one scaling law fails to deliver, what happens to the other scaling laws? The interdependencies of the processes for the scaling laws might evoke new, hitherto unidentified scaling laws. Will each scaling law require massive investments to deliver? Is it feasible to pay off the investments in these processes with the original concept of the scaling law as applied to AI? I wonder if a reverse Ponzi scheme is emerging. The more money pumped in, the smaller the likelihood of success. Is AI a demonstration of convergence, or is it like the harmonic series, a sum of fractions with numerator 1 and ever-larger denominators, where each new term adds less and less? Just askin’.

Third, the performance or knowledge payoff I have experienced with my tests of OpenAI and the software available to me on You.com makes clear that the systems cannot handle what I consider routine questions. A recent example was my request to receive a list of the exhibitors at the November 1 Gateway Conference held in Dubai for crypto fans of Telegram’s The Open Network Foundation and TON Social. The systems were unable to deliver the lists. This is just one notable failure which a humanoid on my research team was able to rectify in an expeditious manner. (Did you know the Ku Group was on my researcher’s list?) Just reportin’.
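The sequence of fractions with numerator 1 and increasing denominators gestured at in the second point is the harmonic series, and it makes a tidy diminishing-returns toy model. The analogy to AI spending is mine, not Mr. Marcus’s, but the arithmetic is standard:

```python
# Partial sums of the harmonic series: 1 + 1/2 + 1/3 + ...
# The total keeps growing (the series diverges), but each added term
# shrinks -- a rough analogy for pouring ever-larger sums into AI
# for ever-smaller incremental gains.

def harmonic_partial_sum(n: int) -> float:
    """Sum of 1/k for k = 1..n."""
    return sum(1.0 / k for k in range(1, n + 1))

s_10 = harmonic_partial_sum(10)        # ~2.93
s_100 = harmonic_partial_sum(100)      # ~5.19
s_1000 = harmonic_partial_sum(1000)    # ~7.49

# Ten times the "investment" (number of terms) buys roughly the same
# ~2.3 increment each time, because the partial sums grow like ln(n).
print(round(s_100 - s_10, 2), round(s_1000 - s_100, 2))
```

In other words: progress never stops outright, but each tenfold increase in inputs buys about the same modest bump in output.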

Net net: Will AI repay the billions sunk into the data centers, the legal fees (many still looming), the staff, and the marketing? If you ask an accelerationist, the answer is, “Absolutely.” If you ask a dinobaby, you may hear, “Maybe, but some fundamental innovations are going to be needed.” If you ask an AI will kill us all type like the Xoogler Mo Gawdat, you will hear, “Doom looms.”  Just dinobabyin’.

Stephen E Arnold, December 4, 2024

The Golden Fleecer of the Year: Boeing

November 29, 2024

When I was working in Washington, DC, I had the opportunity to be an “advisor” to the head of the Joint Committee on Atomic Energy. I recall a comment by Craig Hosmer (R-California), a retired rear admiral: “Those Air Force guys overpay.” The admiral was correct, but I think that other branches of the US Department of Defense have been snookered a time or two.

In the 1970s and 1980s, Senator William Proxmire (D-Wisconsin) had one of his staff keep an eye on reports about wild and crazy government expenditures. Every year, the Senator reminded people of a chivalric award allegedly dating from the 1400s. Yep, the Middle Ages in DC.

The Order of the Golden Fleece in old timey days of yore meant the recipient received a snazzy chivalric order intended to promote Christian values and the good neighbor policy of Spain and Austria. A person with the fleece was important, a bit like a celebrity arriving at a Hollywood Oscar event. (Yawn)


Thanks, Wikipedia. Allegedly an example of a chivalric Golden Fleece. Yes, that is a sheep, possibly dead or getting ready to be dipped.

Reuters, the trusted outfit which tells me it is trusted each time I read one of its “real” news stories, published “Boeing Overcharged Air Force Nearly 8,000% for Soap Dispensers, Watchdog Alleges.” The write up stated in late October 2024:

Boeing overcharged the U.S. Air Force for spare parts for C-17 transport planes, including marking up the price on soap dispensers by 7,943%, according to a report by a Pentagon watchdog. The Department of Defense Office of Inspector General said on Tuesday the Air Force overpaid nearly $1 million for a dozen spare parts, including $149,072 for an undisclosed number of lavatory soap dispensers from the U.S. plane maker and defense contractor.
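As a back-of-the-envelope check of the quoted figures (my arithmetic, not the Inspector General’s), a 7,943% markup on a $149,072 charge implies a commercial value of roughly $1,850 for the lot of dispensers:

```python
# Markup percentage: how far a charged price exceeds a fair/commercial price.
def markup_percent(charged: float, fair: float) -> float:
    return (charged - fair) / fair * 100.0

# Working backward from the report's numbers: a 7,943% markup on the
# $149,072 soap-dispenser charge implies a fair value of about $1,853.
charged = 149_072
implied_fair = charged / (1 + 7_943 / 100)

# Round-trip check: that implied fair value reproduces the reported markup.
assert round(markup_percent(charged, implied_fair)) == 7_943
print(round(implied_fair, 2))
```

The per-dispenser fair price depends on the unit count, which the report does not disclose, so only the total can be backed out this way.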

I have heard that the Department of Defense has not been able to monitor some of its administrative activities or complete an audit of what it does with its allocated funds.

According to the trusted write up:

The Pentagon’s budget is huge, breaking $900 billion last year, making overcharges by defense contractors a regular headache for internal watchdogs, but one that is difficult to detect. The Inspector General also noted it could not determine if the Air Force paid a fair price on $22 million of spare parts because the service did not keep a database of historical prices, obtain supplier quotes or identify commercially similar parts.

My view is that one of the elected officials in Washington, DC, should consider reviving the Proxmire Golden Fleece Award. Boeing may qualify, but there may be other contenders for the award as well.

I quite like the idea of scope changes and engineering change orders for some US government projects. But I have to admit that Senator Proxmire’s identification of a $600 hammer sold to the US Department of Defense is not interesting.

That 8,000 percent mark up is pretty nifty. Oh, on Amazon soap dispensers cost between $20 and $100. Should the Reuters story have mentioned:

  1. Procurement reform
  2. Poor financial controls
  3. Lack of common sense?

Of course not! The trusted outfit does not get mired in silly technicalities. And Boeing? That outfit is doing a bang-up job.

Stephen E Arnold, November 29, 2024

AI and Efficiency: What Is the Cost of Change?

November 18, 2024

No smart software. Just a dumb dinobaby. Oh, the art? Yeah, MidJourney.

Companies are embracing smart software. One question which, from my point of view, gets little attention is, “What is the cost of changing an AI system a year or two down the road?” The focus at this time is getting some AI up and running so an organization can “learn” whether AI works or not. A parallel development is taking place among vendors of enterprise and industry-centric specialized software. Examples range from a brand new AI powered accounting system to Microsoft “sticking” AI into the ASCII editor Notepad.


Thanks, MidJourney. Good enough.

Let’s tally the costs which an organization faces 24 months after flipping the switch in, for example, a hospital chain which uses smart software to convert a physician’s spoken comments about a patient to data which can be used for analysis to provide insight into evidence-based treatment for the hospital’s constituencies.

Here are some costs for staff, consultants, and lawyers:

  1. Paying for the time required to figure out what is on the money and what is not good, or just awful (like dead patients)
  2. The time required to figure out if the present vendor can fix up the problem or a new vendor’s system must be deployed
  3. Going through the smart software recompete or rebid process
  4. Getting the system up and running
  5. The cost of retraining staff
  6. Chasing down dependencies like other third party software for the essential “billing process”
  7. Optimizing the changed or alternative system.

The enthusiasm for smart software makes talking about these future costs fade a little.

I read “AI Makes Tech Debt More Expensive,” and I want to quote one passage from the pretty good essay:

In essence, the goal should be to unblock your AI tools as much as possible. One reliable way to do this is to spend time breaking your system down into cohesive and coherent modules, each interacting through an explicit interface. A useful heuristic for evaluating a set of modules is to use them to explain your core features and data flows in natural language. You should be able to concisely describe current and planned functionality. You might also want to set up visibility and enforcement to make progress toward your desired architecture. A modern development team should work to maintain and evolve a system of well-defined modules which robustly model the needs of their domain. Day-to-day feature work should then be done on top of this foundation with maximum leverage from generative AI tooling.
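The “explicit interface” advice in the quoted passage can be sketched in miniature. The names here (TranscriptStore, InMemoryStore, word_count) are my hypothetical illustration, not anything from the essay:

```python
# A minimal sketch of "cohesive modules behind explicit interfaces":
# feature code depends on a declared Protocol, not on any concrete storage,
# so either half can be replaced (or handed to an AI coding tool) in isolation.
from typing import Protocol


class TranscriptStore(Protocol):
    """Explicit interface: the only contract feature code may rely on."""
    def save(self, record_id: str, text: str) -> None: ...
    def fetch(self, record_id: str) -> list[str]: ...


class InMemoryStore:
    """One concrete module satisfying the interface (a stand-in for a real DB)."""
    def __init__(self) -> None:
        self._rows: dict[str, list[str]] = {}

    def save(self, record_id: str, text: str) -> None:
        self._rows.setdefault(record_id, []).append(text)

    def fetch(self, record_id: str) -> list[str]:
        return self._rows.get(record_id, [])


def word_count(store: TranscriptStore, record_id: str) -> int:
    """Feature code written against the interface, not the implementation."""
    return sum(len(entry.split()) for entry in store.fetch(record_id))


store = InMemoryStore()
store.save("r1", "patient reports mild symptoms")
store.save("r1", "follow up in two weeks")
print(word_count(store, "r1"))  # 9
```

Swapping InMemoryStore for a database-backed module later touches nothing in word_count, which is the point the essay is making about keeping AI tooling “unblocked.”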

Will organizations make this shift? Will the hyperbolic AI marketers acknowledge the future costs of pasting smart software on existing software like circus posters on crumbling walls?

Nope.

Those two-year costs will be interesting for the bean counters when those kicked cans end up in their workspaces.

Stephen E Arnold, November 18, 2024

Let Them Eat Cake or Unplug: The AI Big Tech Bro Effect

November 7, 2024

I spotted a news item which will zip right by some people. The “real” news outfit owned by the lovable Jeff Bezos published “As Data Centers for AI Strain the Power Grid, Bills Rise for Everyday Customers.” The write up tries to explain that AI costs for electric power are being passed along to regular folks. Most of these electricity-dependent people do not take home paychecks of tens of millions of dollars like the Nadella, Zuckerberg, or Pichai type breadwinners do. Heck, these AI poohbahs think about buying modular nuclear power plants. (I want to point out that these do not exist and may not for many years.)

The article is not going to thrill the professionals who are experts on utility demand and pricing. Those folks know that the smart software poohbahs have royally screwed up some weekends and vacations for the foreseeable future.

The WaPo article (presumably blessed by St. Jeffrey) says:

The facilities’ extraordinary demand for electricity to power and cool computers inside can drive up the price local utilities pay for energy and require significant improvements to electric grid transmission systems. As a result, costs have already begun going up for customers — or are about to in the near future, according to utility planning documents and energy industry analysts. Some regulators are concerned that the tech companies aren’t paying their fair share, while leaving customers from homeowners to small businesses on the hook.

Okay, typical “real” journospeak. “Costs have already begun going up for customers.” Hey, no kidding. The big AI parade began with the January 2023 announcement that the Softies were going whole hog on AI. The lovable Google immediately flipped into alert mode. I can visualize flashing yellow LEDs and faux red stop lights blinking in the gray corridors in Shoreline Drive facilities if there are people in those offices again. Yeah, ghostly blinking.

The write up points out, rather unsurprisingly:

The tech firms and several of the power companies serving them strongly deny they are burdening others. They say higher utility bills are paying for overdue improvements to the power grid that benefit all customers.

Who wants PEPCO and VEPCO to kill their service? Actually, no one. Imagine life in NoVa, DC, and the ever lovely Maryland without power. Yikes.

From my point of view, informed by some exposure to the utility sector at a nuclear consulting firm and then at a blue chip consulting outfit, here’s the scoop.

The demand planning done with rigor by US utilities took a hit each time the Big Dogs of AI brought more specialized, power hungry servers online and — here’s the killer, folks — left them on. The way power consumption used to work is that during the day, consumer usage would fall and business/industry usage would rise. The power hogging steel industry was a 24×7 outfit. But over the last 40 years, manufacturing has wound down and consumer demand crept upwards. The curves had to be plotted and the demand projected, but, in general, life was not too crazy for the US power generation industry. Sure, there were the costs associated with decommissioning “old” nuclear plants and expanding new non-nuclear facilities with expensive but mandated environmental gewgaws, gadgets, and gizmos plugged in to save the snail darters and the frogs.

Since January 2023, demand has been curving upwards. Power generation outfits don’t want to miss out on revenue. Therefore, some utilities have worked out what I would call sweetheart deals for electricity for AI-centric data centers. Some of these puppies suck more power in a day than a dying city located in Flyover Country in Illinois.

Plus, these data centers are not enough. Each quarter the big AI dogs explain that more billions will be pumped into AI data centers. Keep in mind: These puppies run 24×7. The AI wolves have worked out discount rates.

What do the US power utilities do? First, the models have to be reworked. Second, the relationships to trade, buy, or “borrow” power have to be refined. Third, capacity has to be added. Fourth, the utility rate people create a consumer pricing graph which may look like this:


Guess who will pay? Yep, consumers.

The red line is the prediction for post-AI electricity demand from the AI big dogs. For comparison, the blue line shows the demand curve before Microsoft ignited the AI wars. The gray line is consumer cost, the monthly electricity bill for Bob and Mary Normcore, and the purple line shows what is and will continue to happen to those consumer electricity costs.

The graph shows that the cost will be passed to consumers. Why? The sweetheart deals to get the Big Dog power generation contracts mean guaranteed cash flow and a hurdle for a low-ball competitor to lumber over. Power utilities are not the Neon Deions of American business.

There will be hand waving by regulators. Some city government types will argue, “We need the data centers.” Podcasts and posts on social media will sprout like weeds in an untended field.

Net net: Bob and Mary Normcore may have to decide between food and electricity. AI is wonderful, right?

Stephen E Arnold, November 7, 2024

Dreaming about Enterprise Search: Hope Springs Eternal…

November 6, 2024

The post is the work of a humanoid who happens to be a dinobaby. GenX, Y, and Z, read at your own risk. If art is included, smart software produces these banal images.

Enterprise search is back, baby. The marketing lingo is very year 2003, however. The jargon has been updated, but the story is the same: We can make an organization’s information accessible. Instead of Autonomy’s Neurolinguistic Programming, we have AI. Instead of “just text,” we have video content processed. Instead of filters, we have access to cloud-stored data.


An executive knows he can crack the problem of finding information instantly. The problem is doing it so that the time and cost of data clean up does not cost more than buying the Empire State Building. Thanks, Stable Diffusion. Good enough.

A good example of the current approach to selling the utility of an enterprise search and retrieval system is the article / interview in Betanews called “How AI Is Set to Democratize Information.” I want to be upfront. I am mostly aligned with the analysis of information and knowledge presented by Taichi Sakaiya. His The Knowledge Value Revolution or a History of the Future has been a useful work for me since the early 1990s. I was in Osaka, Japan, lecturing at the Kansai Institute of Technology when I learned of this book from my gracious hosts and the Managing Director of Kinokuniya (my sponsor). Devaluing knowledge by regressing to the fat part of a Gaussian distribution is not something about which I am excited.

However, the senior manager of Pyron (Raleigh, North Carolina), an AI-powered information retrieval company, finds the concept in line with what his firm’s technology provides to its customers.  The article includes this statement:

The concept of AI as a ‘knowledge cloud’ is directly tied to information access and organizational intelligence. It’s essentially an interconnected network of systems of records forming a centralized repository of insights and lessons learned, accessible to individuals and organizations.

The benefit is, according to the Pyron executive:

By breaking down barriers to knowledge, the AI knowledge cloud could eliminate the need for specialized expertise to interpret complex information, providing instant access to a wide range of topics and fields.

The article introduces a fresh spin on the problems of information in organizations:

Knowledge friction is a pervasive issue in modern enterprises, stemming from the lack of an accessible and unified source of information. Historically, organizations have never had a singular repository for all their knowledge and data, akin to libraries in academic or civic communities. Instead, enterprise knowledge is scattered across numerous platforms and systems — each managed by different vendors, operating in silos.

Pyron opened its doors in 2017. After seven years, the company is presenting a vision of what access to enterprise information could, would, and probably should do.

The reality, based on my experience, is different. I am not talking about Pyron now. I am discussing the re-emergence of enterprise search as the killer application for bolting artificial intelligence to information retrieval. If you are in love with AI systems from oligopolists, you may want to stop scanning this blog post. I do not want to be responsible for a stroke or an esophageal spasm. Here we go:

  1. Silos of information are an emergent phenomenon. Knowledge has value. Few want to make their information available without some value returning to them. Therefore, one can talk about breaking silos and democratization, but those silos will be erected and protected: secret skunk works, mislabeled projects, and knowledge nuggets squirreled away for a winter’s day. In the case of Senator Everett Dirksen, the information was used to get certain items prioritized. That’s why there is a building named after him.
  2. The “value” of information or knowledge depends on another person’s need. A database which contains the antidote to save a child from a household poisoning costs money to access. Why? Desperate people will pay. The “information wants to be free” idea is not one that makes sense to those with information and the knowledge to derive value from what another finds inscrutable. I am not sure that “democratizing information” meshes smoothly with my view.
  3. Enterprise search, with or without AI, hits cost and time problems that have dogged the field for more than 50 years. SMART failed, STAIRS III failed, and the hundreds of followers have failed. Content is messy. The idea that one can process text, spreadsheets, Word files, and email is one thing. Doing it without skipping wonky files, and without the time and cost of repurposing data, remains difficult. Chemical companies deal with formulae; nuclear engineering firms deal with records management and mathematics; and consulting companies deal with highly paid people who lock up their information on a personal laptop. Without these little puddles of information, the “answer” or the “search output” will not be just a hallucination. The answer may be dead wrong.

I understand the need to whip up jargon like “democratize information”, “knowledge friction”, and “RAG frameworks”. The problem is that despite the words, delivering accurate, verifiable, timely on-point search results in response to a query is a difficult problem.

Maybe one of the monopolies will crack the problem. But most of the output is a glimpse of what may be coming in the future. When will the future arrive? Probably when the next PR or marketing write up about search appears. As I have said numerous times, I find it more difficult to locate the information I need than at any time in my more than half a century in online information retrieval.

What’s easy is recycling marketing literature from companies who were far better at describing a “to be” system, not a “here and now” system.

Stephen E Arnold, November 4, 2024

Twenty Five Percent of How Much, Google?

November 6, 2024

The post is the work of a humanoid who happens to be a dinobaby. GenX, Y, and Z, read at your own risk. If art is included, smart software produces these banal images.

I read the encomia to Google’s quarterly report. In a nutshell, everything is coming up roses, even the hyperbole. One news hook which has snagged some “real” news professionals is that “more than a quarter of new code at Google is generated by AI.” The exclamation point is implicit. Google’s AI PR is different from some other firms’; for example, Samsung blames its financial performance disappointments on some AI. Winners and losers in a game in which some think the oligopolies are automatic winners.


An AI believer sees the future which is arriving “soon, real soon.” Thanks, You.com. Good enough because I don’t have the energy to work around your guard rails.

The question is, “How much code and technical debt does Google have after a quarter century of its court-described monopolistic behavior?” Oh, that number is unknown. How many current Google engineers fool around with that legacy code? Oh, that number is unknown and probably for very good reasons. The old crowd of wizards has been hit with retirement, cashing in and cashing out, and “leadership” nervous about fiddling with some processes that are “good enough.” But 25 years. No worries.

The big news is that 25 percent of “new” code is written by smart software and then checked by the current and wizardly professionals. How much “new” code is written each year for the last three years? What percentage of the total Google code base is “new” in the years between 2021 and 2024? My hunch is that “new” is relative. I also surmise that smart software doing 25 percent of the work is one of those PR and Wall Street targeted assertions specifically designed to make the Google stock go up. And it worked.

However, I noted this Washington Post article: “Meet the Super Users Who Tap AI to Get Ahead at Work.” Buried in that mostly rah-rah AI “real” news write up, which ran coincident with Google’s AI-spinning quarterly report, is one interesting comment:

Adoption of AI at work is still relatively nascent. About 67 percent of workers say they never use AI for their jobs compared to 4 percent who say they use it daily, according to a recent survey by Gallup.

One can interpret this as saying, “Imagine the growth that is coming from reduced costs. Get rid of most coders and just use Google’s and other firms’ smart programming tools.”

Another interpretation is, “The actual use is much less robust than the AI hyperbole machine suggests.”

Which is it?

Several observations:

  1. Many people want AI to pump some life into the economic fuel tank. By golly, AI is going to be the next big thing. I agree, but I think the Gallup data indicates that the go go view is like looking at a field of corn from a crop duster zipping along at 1,000 feet. The perspective from the airplane is different from the person walking amidst the stalks.
  2. The Google-type assertions about how much machine-generated code is in the Google mix sound good, but where are the data? Google, aren’t you data driven? So where’s the backup data for the 25 percent assertion?
  3. Smart software seems to be something that is expensive and requires dreams of small nuclear reactors next to a data center adjacent to a hospital. Yeah, maybe once the impact statements, the nuclear waste, and the skilled worker issues have been addressed. Soon, as measured in environmental impact statement time, which is different from quarterly report time.

Net net: Google desperately wants to be the winner in smart software. The company is suggesting that if it were broken apart by crazed government officials, smart software would die. Insert the exclamation mark. Maybe two or three. That’s unlikely. The blurring of “as is” with “to be” is interesting and misleading.

Stephen E Arnold, November 6, 2024

How to Cut Podcasts Costs and Hassles: A UK Example

November 5, 2024

Using AI to replicate a particular human is a fraught topic. Of paramount concern is the relentless issue of deepfakes. There are also legal issues of control over one’s likeness, of course, and concerns the technology could put humans out of work. It is against this backdrop, the BBC reports, that “Michael Parkinson’s Son Defends New AI Podcast.” The new podcast uses AI to recreate the late British talk show host, who will soon interview (human) guests. Son Mike acknowledges the concerns, but insists this project is different. Writer Steven McIntosh explains:

“Mike Parkinson said Deep Fusion’s co-creators Ben Field and Jamie Anderson ‘are 100% very ethical in their approach towards it, they are very aware of the legal and ethical issues, and they will not try to pass this off as real’. Recalling how the podcast was developed, Parkinson said: ‘Before he died, we [my father and I] talked about doing a podcast, and unfortunately he passed away before it came true, which is where Deep Fusion came in. ‘I came to them and said, ‘if we wanted to do this podcast with my father talking about his archive, is it possible?’, and they said ‘it’s more than possible, we think we can do something more’. He added his father ‘would have been fascinated’ by the project, although noted the broadcaster himself was a ‘technophobe’. Discussing the new AI version of his father, Parkinson said: ‘It’s extraordinary what they’ve achieved, because I didn’t really think it was going to be as accurate as that.’”

So they have the family’s buy-in, and they are making it very clear the host is remade with algorithms. The show is called “Virtually Parkinson,” after all. But there is still that replacing human talent with AI thing. Deep Fusion’s Anderson notes that, since Parkinson is deceased, he is in no danger of losing work. However, McIntosh counters, any guest that appears on this show may give one fewer interview to a show hosted by a different, living person. Good point.

One thing noteworthy about Deep Fusion’s AI on this project is its ability to not just put words in Parkinson’s mouth, but to predict how he would have actually responded. Assuming that function is accurate, we have a request: Please bring back the objective reporting of Walter Cronkite. This world sorely needs it.

Cynthia Murrell, November 5, 2024

Apple: Challenges Little and Bigly

October 28, 2024

Another post from a dinobaby. No smart software required except for the illustration.

At lunch yesterday (October 23, 2024), one of the people in the group had a text message with a long string of data. That person wanted to move the data from the text message into an email. The idea was to copy a bit of ASCII, put it in an email, and email the data to his office email account. Simple? He fiddled but could not get the iPhone to do the job. He showed me the sequence, and when he went through the highlighting, the curly arrow, and the tap to copy, he was following the procedure. When he switched to email and long pressed, the text was not available. A couple of people tried to make this sequence of tapping and long pressing work. Someone handed the phone to me. I fooled around with it, asked the person to restart the phone, and went through the process. It took two tries, but I got the snip of ASCII to appear in the email message. Yep, that’s the Apple iPhone. Everyone loves the way it works, except when it does not. The frustration the iPhone owner demonstrated illustrates the “good enough” approach to many functions in Apple’s and other firms’ software.


Will the normal course of events swamp this big time executive? Thanks, You.com. You were not creative, but you were good enough.

Why mention this?

Apple is a curious company. The firm has been a darling of core fans, investors, and the MBA crowd. I have noted two actions related to Apple which suggest that the company may have a sleek exterior but a different interior. Let’s look at these two recent developments.

The first item concerns what appears to be untoward behavior by Apple and those really good folks at Goldman Sachs. The Apple credit card operation received, in effect, a statement showing that $89 million was due. The issue appears to be fumbling the ball with customers. For a well managed company, how does this happen? My view is that getting cute was not appreciated by some government authorities. A tiny mistake? Yes. The fine is miniscule compared to the revenue represented by the enterprises paying it. With small fines, have the Apple and Goldman Sachs professionals learned a lesson? Yes: get out of the credit card game. Other than that, I surmise that neither of the companies will veer from their game plans.

The second item is, from my point of view, a bit more interesting than credit card cuteness. Apple, if the news report in the Washington Times is close to the truth, is getting very comfortable with China. The basic idea is that Apple wants to invest in China. Is China the best friend forever of the US? I thought some American outfits were somewhat cautious with regard to their support of that nation state. Well, that caution does not appear to apply to Apple.

With the weird software, the credit card judgment, and the China love fest, we have three examples of a company operating in what I would describe as a fog of pragmatism. The copy paste issue makes clear that simplicity and attention to a common task on a widely used device are not important. The message from the iPhone is, “Figure out our way. Don’t even think about a meaningful, user-centric change. Just upgrade and get the vapor of smart software.”

The message from the credit card judgment is, “Hey, we will do what we want. If there is a problem, send us a bill. We will continue to do what we want.” That shows me that Apple buys into the behavior pattern which makes Silicon Valley the gold standard in management excellence.

My interpretation of the China-Apple BFF activity is that the policy of the US government is of little interest. Apple, like other large technology outfits, is effectively operating as a nation state. The company will do what it wants and let lawyers and PR people make the activity palatable.

I find it amusing that Apple appears to be reducing orders for its next big iPhone release. The market may be reaching a saturation point, or economic conditions in certain markets may make lower cost devices more appealing. My own view is that the AI vapor spewed by Apple and other US companies is dissipating. Another utility function which does not work in a reliable way may not be enough.

Why not make copy paste more usable or is that a challenge beneath your vast aspirations?

Stephen E Arnold, October 28, 2024

Meta, Politics, and Money

October 24, 2024

Meta and its flagship product, Facebook, make money from advertising. Targeted advertising using Meta’s personalization algorithm is profitable, and political views seem to turn the money spigot. Remember the January 6 riots or how Russia allegedly influenced the 2016 presidential election? Part of the reason those happened was targeted advertising through social media like Facebook.

Gizmodo reviews how much Meta generates from political advertising in: “How Meta Brings In Millions Off Political Violence.” The Markup and CalMatters tracked how much money Meta made from merchandise advertising after the July assassination attempt on Trump. The total runs between $593,000 and $813,000. The number may understate the actual money:

“If you count all of the political ads mentioning Israel since the attack through the last week of September, organizations and individuals paid Meta between $14.8 and $22.1 million dollars for ads seen between 1.5 billion and 1.7 billion times on Meta’s platforms. Meta made much less for ads mentioning Israel during the same period the year before: between $2.4 and $4 million dollars for ads that were seen between 373 million and 445 million times.  At the high end of Meta’s estimates, this was a 450 percent increase in Israel-related ad dollars for the company. (In our analysis, we converted foreign currency purchases to current U.S. dollars.)”

The organizations that funded those ads were supporters of Palestine or Israel. Meta doesn’t care who pays for ads. Tracy Clayton, a Meta spokesperson, said that ads go through a review process to determine whether they adhere to community standards. She also said that advertisers don’t run their ads during times of strife, because they don’t want their goods and services associated with violence.

That’s not what the evidence shows. The Markup and CalMatters researched the ads’ subject matter after the July assassination attempt. While the ads didn’t violate Meta’s guidelines, they did relate to the event. There were ads for gun holsters and merchandise about the shooting. It was a business opportunity, and people ran with it, with Meta holding the finish line ribbon.

Meta really has an interesting ethical framework.

Whitney Grace, October 24, 2024

Money and Open Source: Unpleasant Taste?

October 23, 2024

Open-source veteran and blogger Armin Ronacher ponders “The Inevitability of Mixing Open Source and Money.” It is lovely when developers work on open-source projects for free out of the goodness of their hearts. However, the truth is these folks can only afford to spend so much time working for free. (A major reason open source documentation is a mess, by the way.)

For his part, Ronacher helped launch Sentry’s Open Source Pledge. That initiative asks companies to pledge funding to open source projects they actively use. It is particularly focused on small projects, like xz, that have a tougher time attracting funds than the big names. He acknowledges the perils of mixing open source and money, as described by David Heinemeier Hansson. But he insists the blend is already baked in. He considers:

“At face value, this suggests that Open Source and money shouldn’t mix, and that the absence of monetary rewards fosters a unique creative process. There’s certainly truth to this, but in reality, Open Source and money often mix quickly. If you look under the cover of many successful Open Source projects you will find companies with their own commercial interests supporting them (eg: Linux via contributors), companies outright leading projects they are also commercializing (eg: MariaDB, redis) or companies funding Open Source projects primarily for marketing / up-sell purposes (uv, next.js, pydantic, …). Even when money doesn’t directly fund an Open Source project, others may still profit from it, yet often those are not the original creators. These dynamics create stresses and moral dilemmas.”

Consider, for example, the conflict involving WordPress and WP Engine. The tension can also cause personal stress. Ronacher shares doubts that have plagued him: to monetize or not to monetize? Would a certain project have taken off had he poured his own money into it? He has watched colleagues wrestle with similar questions that affected their health and careers. See his post for more on those issues. The write-up concludes:

“I firmly believe that the current state of Open Source and money is inadequate, and we should strive for a better one. Will the Pledge help? I hope for some projects, but WordPress has shown that we need to drive forward that conversation of money and Open Source regardless of the size of the project.”

Clearly, further discussion is warranted. New ideas from open-source enthusiasts are also needed. Can a balance be found?

Cynthia Murrell, October 23, 2024
