Attention, New MBAs in Finance: AI-gony Arrives

March 6, 2025

Another post from the dinobaby. Alas, no smart software used for this essay.

I did a couple of small jobs for a big Wall Street outfit years ago. I went to meetings, listened, and observed. To be frank, I did not do much work. There were three or four young, recent graduates of fancy schools. These individuals were similar to the colleagues I had at the big time consulting firm at which I worked earlier in my career.

Everyone was eager, and their Excel fevers were in full bloom: bright eyes, earnest expressions, and a gentle but persistent panting in these meetings. Wall Street and Wall Street-like firms in London, England, and Los Angeles, California, were quite similar. These churn outfits and deal makers shared DNA or some type of quantum entanglement.

These “analysts” or “associates” gathered data and pumped it into Excel spreadsheets set up by colleagues or technical specialists. Macros processed the data and spit out tables, charts, and graphs. These were written up as memos and reports for those with big sticks, the senior deciders.

My point is that the “work” was done by cannon fodder from well-known universities’ business or finance programs.

Well, bad news, future BMW buyers: an outfit called PublicView.ai may have curtailed your dreams of a six-figure bonus in January or whatever month is the big momma at your firm. You can take a look at example outputs and sign up free at https://www.publicview.ai/.

If the smart product works as advertised, a category of financial work is going to be reshaped. It is possible that fewer analyst jobs will become available as the gathering and importing are converted to automated workflows. The meetings and the panting will become fewer and farther between.

I don’t have data about how many worker bees power the Wall Street type outfits. I showed up, delivered information when queried, departed, and sent a bill for my time and travel. The financial hive and its quietly buzzing drones plugged away 10 or more hours a day, mostly six days a week.

The PublicView.ai FAQ page answers some basic questions; for example, “Can I perform quantitative analysis on the files?” The answer is:

Yes, you can ask Publicview to perform computations on the files using Python code. It can create graphs, charts, tables and more.

This is good news for the newly minted MBAs with programming skills. The bad news is that repeatable questions can be converted to workflows.
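To make the “repeatable questions become workflows” point concrete, here is a minimal sketch of the kind of analyst chore that can be scripted. This is not PublicView.ai’s code; the file name and column names are assumptions for illustration only.

# A minimal sketch: a repeatable analyst question ("what is quarter-over-quarter
# revenue growth?") turned into a script. The CSV file and its columns are
# hypothetical; PublicView.ai's own tooling is not shown here.
import pandas as pd

def quarterly_revenue_growth(path: str) -> pd.DataFrame:
    df = pd.read_csv(path, parse_dates=["quarter_end"])
    df = df.sort_values("quarter_end")
    df["qoq_growth_pct"] = df["revenue"].pct_change() * 100
    return df[["quarter_end", "revenue", "qoq_growth_pct"]]

if __name__ == "__main__":
    table = quarterly_revenue_growth("acme_quarterly_filings.csv")
    print(table.to_string(index=False))

Run once, the question never needs an associate again. That is the workflow risk.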

Let’s assume this product is good enough. There will be no overnight change in the work for existing employees. But slowly the senior managers will get the bright idea of hiring MBAs with different skills, possibly on a contract basis. Then the work will begin to shift to software. At some point in the not-too-distant future, jobs for humans will be eliminated.

The question is, “How quickly can new hires make themselves into higher value employees in what are the early days of smart software?”

I suggest getting on a fast horse and galloping forward. Donkeys with Excel will fall behind. Software does not require health care, ever-increasing inducements, and vacations. What’s interesting is that at some point many “analyst” jobs, not just in finance, will be handled by “good enough” smart software.

Remember, a 51 percent win rate from code that does not hang out with a latte will strike some in carpetland as a no-brainer. The good news is that MBAs don’t have a graduate degree in 18th century buttons or the Brutalist movement in architecture.

Stephen E Arnold, March 6, 2025

Big Thoughts On How AI Will Affect The Job Market

March 4, 2025

Every time there is an advancement in technology, humans fear they won’t be able to make an income. While some jobs have disappeared, others have emerged, and humans have adapted to the changes. We’ll continue to adapt as AI becomes more integral to society. How will we handle the changes?

Anthropic, a big player in the AI field, launched the Anthropic Economic Index to understand AI’s effects on labor markets and the economy. Anthropic claims it’s gathering “first-of-its-kind” data from anonymized Claude.ai conversations. This data demonstrates how AI is incorporated into the economy. The organization is also building an open source dataset for researchers to use and build on its findings. Anthropic surmises that this data will help develop policy on employment and productivity.

Anthropic reported on their findings in their first paper:

• “Today, usage is concentrated in software development and technical writing tasks. Over one-third of occupations (roughly 36%) see AI use in at least a quarter of their associated tasks, while approximately 4% of occupations use it across three-quarters of their associated tasks.

• AI use leans more toward augmentation (57%), where AI collaborates with and enhances human capabilities, compared to automation (43%), where AI directly performs tasks.

• AI use is more prevalent for tasks associated with mid-to-high wage occupations like computer programmers and data scientists, but is lower for both the lowest- and highest-paid roles. This likely reflects both the limits of current AI capabilities, as well as practical barriers to using the technology.”

The Register put the Anthropic report in layman’s terms in the article, “Only 4 Percent Of Jobs Rely Heavily On AI, With Peak Use In Mid-Wage Roles.” They share that only 4% of occupations rely heavily on AI, using it for at least 75% of their tasks. Overall, roughly 36% of occupations use AI for at least a quarter of their tasks. Most of these jobs are in software engineering, media, and educational/library fields. Physical jobs use AI less. Anthropic also found that 57% of AI use augments human tasks while 43% automates them.

These numbers make sense based on AI’s advancements and limitations. It’s also common sense that mid-tier wage roles will be affected and not physical or highly skilled labor. The top tier will surf on money; the water molecules are not so lucky.

Whitney Grace, March 4, 2025

AI Research Tool from Perplexity Is Priced to Undercut the Competition

February 26, 2025

Are prices for AI-generated research too darn high? One firm thinks so. In a Temu-type bid to take over the market, reports VentureBeat, "Perplexity Just Made AI Research Crazy Cheap—What that Means for the Industry." CEO Aravind Srinivas credits open source software for making the move possible, opining that "knowledge should be universally accessible." Knowledge, yes. AI research? We are not so sure. Nevertheless, here we are. The write-up describes the difference in pricing:

"While Anthropic and OpenAI charge thousands monthly for their services, Perplexity offers five free queries daily to all users. Pro subscribers pay $20 monthly for 500 daily queries and faster processing — a price point that could force larger AI companies to explain why their services cost up to 100 times more."

Not only is Perplexity’s Deep Research cheaper than the competition, crows the post, its accuracy rivals theirs. We are told:

"[Deep Research] scored 93.9% accuracy on the SimpleQA benchmark and reached 20.5% on Humanity’s Last Exam, outperforming Google’s Gemini Thinking and other leading models. OpenAI’s Deep Research still leads with 26.6% on the same exam, but OpenAI charges $200 percent for that service. Perplexity’s ability to deliver near-enterprise level performance at consumer prices raises important questions about the AI industry’s pricing structure."

Well, okay. Not to stray too far from the point, but is a 20.5% or a 26.6% on Humanity’s Last Exam really something to brag about? Last we checked, those were failing grades. By far. Isn’t it a bit too soon to be outsourcing research to any LLM? But I digress.

We are told the low, low cost Deep Research is bringing AI to the micro-budget masses. And, soon, to the Windows-less—Perplexity is working on versions for iOS, Android, and Mac. Will this spell disaster for the competition?

Cynthia Murrell, February 26, 2025

Rest Easy. AI Will Not Kill STEM Jobs

February 25, 2025

Written by a dinobaby, not smart software. But I would replace myself with AI if I could.

Bob Hope quipped, “A sense of humor is good for you. Have you ever heard of a laughing hyena with heartburn?” No, Bob, I have not.

Here’s a more modern joke for you from the US Bureau of Labor Statistics circa 2025. It is much fresher than Mr. Hope’s quip from a half century ago.

The Bureau of Labor Statistics says:

Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. (Source: Investopedia)

Okay, I wonder what to make of those LinkedIn, XTwitter, and Reddit posts about technology workers not being able to find jobs in these situations:

  1. Recent college graduates with computer science degrees
  2. Recently terminated US government workers from agencies like 18F
  3. Workers over 55 urged to take early retirement?

The item about the rosy job market appeared in Slashdot too. Here’s the quote I noted:

Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.

Robert Half, an employment firm, is equally optimistic. Just a couple of weeks ago, that outfit said:

Companies continue facing strong competition from other firms for tech talent, particularly for candidates with specialized skills. Across industries, AI proficiency tops the list of most-sought capabilities, with organizations needing expertise for everything from chatbots to predictive maintenance systems. Other in-demand skill areas include data science, IT operations and support, cybersecurity and privacy, and technology process automation.

What am I to conclude from these US government data? Here are my preliminary thoughts:

  1. The big time consulting firms are unlikely to change their methods of cost reduction; that is, if software (smart or dumb) can do a job for less money, that software will be included on a list of options. Given a choice of going out of business or embracing smart software, a significant percentage of consulting firm clients will give AI a whirl. If AI works and the company stays in business or grows, the humans will be repurposed or allowed to find their future elsewhere.
  2. The top one percent in any discipline will find work. The other 99 percent will need to have family connections, family wealth, or a family business to provide a boost for a great job. What if a person is not in the top one percent of something? Yeah, well, that’s not good for quite a few people.
  3. The permitted dominance of duopolies or oligopolies in most US business sectors means that some small and mid-sized businesses will have to find ways to generate revenue. My experience in rural Kentucky is that local accounting, legal, and technology companies are experimenting with smart software to boost productivity (the MBA word for cheaper work functions). Local employment options are dwindling because the smaller employers cannot stay in business. Potential employees want more pay than the company can afford. Result? Downward spiral which appears to be accelerating.

Am I confident in statistics related to wages, employment, and the growth of new businesses and industrial sectors? No, I am not. Statistical projections work pretty well in nuclear fuel management. Nested mathematical procedures in smart software work pretty well for some applications. Using smart software to reduce operating costs works pretty well right now.

Net net: Without meaningful work, some of life’s challenges will spark unanticipated outcomes. Exactly what type of stress breaks a social construct? Those in the job hunt will provide numerous test cases, and someone will do an analysis. Will it be correct? Sure, close enough for horseshoes.

Stop complaining. Just laugh as Mr. Hope noted. No heartburn and cost savings to boot.

Stephen E Arnold, February 25, 2025

Are These Googlers Flailing? (Yes, the Word Has “AI” in It Too)

February 12, 2025

Is the Byte write up on the money? I don’t know, but I enjoyed it. Navigate to “Google’s Finances Are in Chaos As the Company Flails at Unpopular AI. Is the Momentum of AI Starting to Wane?” I am not sure that AI is in its waning moment. Deepseek has ignited a fire under some outfits. But I am not going to critique the write up. I want to highlight some of its interesting information. Let’s go, as Anatoly the gym Meister says, just with an Eastern European accent.

Here’s the first statement in the article which caught my attention:

Google’s parent company Alphabet failed to hit sales targets, falling 0.1 percent short of Wall Street’s revenue expectations — a fraction of a point that’s seen the company’s stock slide almost eight percent today, in its worst performance since October 2023. It’s also a sign of the times: as the New York Times reports, the whiff was due to slower-than-expected growth of its cloud-computing division, which delivers its AI tools to other businesses.

Okay, 0.1 percent is something, but I would have preferred that the “flail” metaphor appear in the paragraph; it begs for “flog,” “thrash,” and “whip.”


I used Sam AI-Man’s AI software to produce a good enough image of Googlers flailing. Frankly I don’t think Sam AI-Man’s system understands exactly what I wanted, but close enough for horseshoes in today’s world.

I noted this information and circled it. I love Gouda cheese. How can Google screw up cheese after its misstep with glue and cheese on pizza? Yo, Googlers. Check the cheese references.

Is Alphabet’s latest earnings result the canary in the coal mine? Should the AI industry brace for tougher days ahead as investors become increasingly skeptical of what the tech has to offer? Or are investors concerned over OpenAI’s ChatGPT overtaking Google’s search engine? Illustrating the drama, this week Google appears to have retroactively edited the YouTube video of a Super Bowl ad for its core AI model called Gemini, to remove an extremely obvious error the AI made about the popularity of gouda cheese.

Stalin revised history books. Google changes cheese references for its own advertising. But cheese?

The write up concludes with this, mostly from the Guardian, the UK newspaper that watches American high technology:

“Although it’s still well insulated, Google’s advantages in search hinge on its ubiquity and entrenched consumer behavior,” Emarketer senior analyst Evelyn Mitchell-Wolf told The Guardian. This year “could be the year those advantages meaningfully erode as antitrust enforcement and open-source AI models change the game,” she added. “And Cloud’s disappointing results suggest that AI-powered momentum might be beginning to wane just as Google’s closed model strategy is called into question by Deepseek.”

Does this constitute the use of the word “flail”? Sure, but I like “thrash” a lot. And “wane” is good.

Stephen E Arnold, February 12, 2025

Deepseek: Details Surface Amid Soft Numbers

February 7, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

I read “Research Exposes Deepseek’s AI Training Cost Is Not $6M, It’s a Staggering $1.3B.” The assertions in the write up are interesting and closer to the actual cost of the Deepseek open source smart software. Let’s take a look at the allegedly accurate and verifiable information. Then I want to point out a few costs not included in the estimated cost of Deepseek.

The article explains that the outlay was closer to $1.3 billion. I am not sure if this estimate is on the money, but a higher cost is certainly understandable based on the money burning activities of outfits like Microsoft, OpenAI, Facebook / Meta, and the Google, among others.

The article says:

In its latest report, SemiAnalysis, an independent research company, has spotlighted Deepseek, a rising player in the AI landscape. The SemiAnalysis challenges some of the prevailing narratives surrounding Deepseek’s costs and compares them to competing technologies in the market. One of the most prominent claims in circulation is that Deepseek V3 incurs a training cost of around $6 million.

One important point is that building a smart software system and making it available for free incurs many costs. The consulting firm has narrowed its focus to training costs.

The write up reports:

The $6 million estimate primarily considers GPU pre-training expenses, neglecting the significant investments in research and development, infrastructure, and other essential costs accruing to the company. The report highlights that Deepseek’s total server capital expenditure (CapEx) amounts to an astonishing $1.3 billion. Much of this financial commitment is directed toward operating and maintaining its extensive GPU clusters, the backbone of its computational power.

But “astonishing.” Nope. Sam AI-Man tossed around numbers in the trillions. I am not sure we will ever know how much Amazon, Facebook, Google, and Microsoft — to name four outfits — have spent in the push to win the AI war, get a new monopoly, and control everything from baby cams to zebra protection in South Africa.

I do agree that the low ball number was low, but I think the pitch for this low ball was a tactic designed to see what a Chinese-backed AI product could do to the US financial markets.

There are some costs that neither the SemiAnalysis outfit nor the Interesting Engineering wordsmith considered.

First, if you take a look at the authors of the Deepseek ArXiv papers you will see a lot of names. Most of these individuals are affiliated with Chinese universities. How were these costs handled? My hunch is that the costs were paid by the Chinese government and the authors of the paper did what was necessary to figure out how to come up with a “do more for less” system. The idea is that China, hampered by US export restrictions, is better at AI than the mythological Silicon Valley. Okay, that’s a good intelligence operation: Test destabilization with a reasonably believable free software gilded with AI sparklies. But the costs? Staff, overhead, and whatever perks go with being a wizard at a Chinese university have to be counted, multiplied by the time required to get the system to work mostly, and then included in the statement of accounts. These steps have not been taken, but a company named Complete Analytics should do the work.

Second, what was the cost of the social media campaign that made Deepseek more visible than the head referee of the Kansas City Chiefs and Philadelphia Eagles game? That cost has not been considered. Someone should grind through the posts, count the authors or their handles, and produce an estimate. As far as I know, there is no information about who is a paid promoter of Deepseek.

Third, how much did the electricity cost to get Deepseek to do its tricks? We must not forget the power at the universities, the research labs, and the laptops. Technology Review has some thoughts along this power line.

Finally, what’s the cost of the overhead? I am thinking about the planning time, the lunches, the meetings, and the back and forth needed to get Deepseek on track to coincide with the new president’s push to make China not so great again. We have nothing. We need a firm called SpeculativeAnalytics for this task, or maybe MasterCard can lend a hand.
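To make the accounting gap concrete, here is a toy tally. The two dollar figures are the ones reported in the article; the other categories are the uncounted items listed above, for which no values have been published, so they stay unknown rather than zero.

# Toy cost tally for the Deepseek accounting argument. Only the two reported
# figures are real; they measure different things and are not simply additive.
reported = {
    "gpu_pre_training_estimate": 6_000_000,    # the circulated "$6 million"
    "server_capex_estimate": 1_300_000_000,    # SemiAnalysis figure
}
uncounted = {
    "university_staff_and_overhead": None,
    "social_media_promotion": None,
    "electricity": None,
    "planning_meetings_and_misc_overhead": None,
}

for item, dollars in reported.items():
    print(f"{item}: ${dollars:,}")
for item in uncounted:
    print(f"{item}: unknown")
# Whatever the full bill is, it is "the published figures plus several
# unknowns," which is the point: the $6 million headline is not the invoice.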

Net net: The Deepseek operation worked. The recriminations, the allegations, and the explanations will begin. I am not sure they will have as much impact as this China smart, US dumb strategy. Plus, that SemiAnalysis name is a hoot.

Stephen E Arnold, February 7, 2025

Online Generates Fans and Only Fans

February 6, 2025

Ah, the World Wide Web—virtual land of opportunity! For example, as Canada’s CBC reports, "Olympians Are Turning to OnlyFans to Fund Dreams as they Face a ‘Broken’ Finance System." Because paying athletes to compete tarnishes the Olympic ideal, obviously. Never mind the big bucks raked in by the Olympic Committee. It’s the principle of the thing. We learn:

"Dire financial straits are leading droves of Olympic athletes to sell images of their bodies to subscribers on OnlyFans — known for sexually explicit content — to sustain their dreams of gold at the Games. As they struggle to make ends meet, a spotlight is being cast on an Olympics funding system that watchdog groups condemn as ‘broken,’ claiming most athletes ‘can barely pay their rent.’ The Olympics, the world’s biggest sporting stage, bring in billions of dollars in TV rights, ticket sales and sponsorship, but most athletes must fend for themselves financially."

But wait, what about those Olympians like Michael Phelps and Simone Biles who make millions? Success stories like theirs are few. The article shares anecdotes of athletes who have taken the OnlyFans route. They are now able to pay their bills, including thousands of dollars in expenses like coaching, physical therapy, and equipment. However, in doing so they face social stigma. None are doing this because they want to, opines Mexican diver Diego Balleza Isaias, but because they have to.

Why are the world’s top athletes selling (images of) their finely honed bodies to pay the bills? The write-up cites comments from the director of Global Athlete, an athlete-founded organization addressing the power imbalance in sports:

"’The entire funding model for Olympic sport is broken. The IOC generates now over $1.7 billion US per year and they refuse to pay athletes who attend the Olympics,’ said Rob Koehler, Global Athlete’s director general. He criticized the IOC for forcing athletes to sign away their image rights. ‘The majority of athletes can barely pay their rent, yet the IOC, national Olympic committees and national federations that oversee the sport have employees making over six figures. They all are making money off the backs of athletes."

Will this trend prompt the Olympic Committee to change its ways? Or will it just make a rule against the practice and try to sweep this whole chapter under the mat? The corroding Olympic medals complement this story too.

Cynthia Murrell, February 6, 2025

eGames Were Supposed to Spin Cash Forever

February 5, 2025

Videogames are still a young medium, but they’re over fifty years old. The gaming industry has seen ups and downs with the first (and still legendary) being the 1983 crash. Arcade games were all the rage back then, but these days consoles and computers have the action. At least, they should.

Wired writes that “2024 Was The Year The Bottom Fell Out Of The Games Industry” for multiple reasons. There were massive layoffs in 2023, with over 10,000 game developers losing their jobs. Some of this was attributed to AI slowly replacing developers. The gaming industry’s job loss in 2024 was forty percent higher than the prior year. Yikes!

DEI (diversity, equity, and inclusion) combined with a woke mantra was also blamed for the failure of many games, including Suicide Squad: Kill the Justice League. The phrase “go woke, go broke” echoed throughout the industry as it does in Hollywood, Silicon Valley, and other fields.

According to Matthew Ball, an adviser and producer in the games and TV space, the blame for all of this can’t be pinned on a single thing, like capitalism, mismanagement, Covid-19, or even interest rates. It also involves development costs, how studios are staffed, consumers’ spending habits, and game pricing. “This storm is so brutal,” he says, “because it is all of these things at once, and none have really alleviated since the layoffs began.”

Many indie studios were shuttered, and large tech leaders such as Microsoft and Sony shut down parts of their gaming divisions. There was also a chain of events, influenced by hatred of DEI and its associated mindsets, that is being called a second GamerGate.

The gaming industry will continue through the beginning of 2025 with business as usual. The industry will bounce back, but it will be different than it was in the past.

Whitney Grace, February 5, 2025

Google and Job Security? What a Hoot

February 4, 2025

We have smart software, but the dinobaby continues to do what 80 year olds do: Write the old-fashioned human way. We did give up clay tablets for a quill pen. Works okay.

Yesterday (January 30, 2025), one member of the group mentioned that Google employees were circulating a YAP. I was not familiar with the word “yap,” so I asked, “What’s a yap?” The answer: It is yet another petition.

Here’s what I learned and then verified by a source no less pristine than NBC News. About 1,000 employees want Google to assure the workers that they have “job security.” Yo, Googlers, when lawyers at the Department of Justice and other Federal workers lose their jobs between sips of their really lousy DoJ coffee, there is not much job security. Imagine professionals with sinecures now forced to offer some version of reality on LinkedIn. Get real.

The “real” news outfit reported:

Google employees have begun a petition for “job security” as they expect more layoffs by the company. The petition calls on Google CEO Sundar Pichai to offer buyouts before conducting layoffs and to guarantee severance to employees that do get laid off. The petition comes after new CFO Anat Ashkenazi said one of her top priorities would be to drive more cost cutting as Google expands its spending on artificial intelligence infrastructure in 2025.

I remember when Googlers talked about the rigorous screening process required to get a job. This was the unicorn-like Google Labs Aptitude Test or GLAT. At one point, years ago, someone in the know gave me the “test” before a meeting. Here’s the first page of the document. (I think I received this from a Googler in 2004 or 2005.)

[Image: the first page of the GLAT]

If you can’t read this, here’s question 6:

On your first day at Google, you discover that your cubicle mate wrote the textbook you used as a primary resource in your first year of graduate school. Do you:

a) Fawn obsequiously and ask if you can have an autograph

b) Sit perfectly still and use only soft keystrokes to avoid disturbing her concentration

c) Leave her daily offerings of granola and English toffee from the food bins

d) Quote your favorite formula from the text book and explain how it’s now your mantra

e) Show her how example 17b could have been solved with 34 fewer lines of code?

I have the full GLAT if you want to see it. Just write benkent2020 at yahoo dot com and we will find a way to provide the allegedly real document to you.

The good old days of Googley fun and self confidence are, it seems, gone. As a proxy for the old Google, we have words like this from employees:

“We, the undersigned Google workers from offices across the US and Canada, are concerned about instability at Google that impacts our ability to do high quality, impactful work,” the petition says. “Ongoing rounds of layoffs make us feel insecure about our jobs. The company is clearly in a strong financial position, making the loss of so many valuable colleagues without explanation hurt even more.”

I would suggest that the petition won’t change Google’s RIF. The company faces several challenges. One of the major ones is the near impossibility of paying for [a] indexing and updating the wonderful Google index, [b] spending money in order to beat the pants off the outfits which used Google’s transformer tricks, and [c] buying, hiring, or coercing the really big-time AI wizards to join the online advertising company instead of starting an outfit to create a wrapper for Deepseek and getting money from whoever will offer it.

Sorry, petitions are unlikely to move a former McKinsey big time blue chip consultant. Get real, Googler. By the way, you will soon be a proud Xoogler. Enjoy that distinction.

Stephen E Arnold, February 4, 2025

Dumb Smart Software? This Is News?

January 31, 2025

A blog post written by a real and still-alive dinobaby. If there is art, there is AI in my workflow.

The prescient “real” journalists at the Guardian have a new insight: When algorithms are involved, humans get the old shaftola. I assume that Weapons of Math Destruction was not on some folks’ reading list. (O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016). That book did a reasonably good job of explaining how smart software’s math can create some excitement for mere humans. Anecdotes abound about Amazon’s management of its team of hard-working delivery professionals, who shift into survival tricks like those revealed by the wily Dane creating Survival Russia videos for YouTube.

(Yep, he took his kids to search for graves near a gulag.) “It’s a Nightmare: Couriers Mystified by the Algorithms That Control Their Jobs” explains that smart software raises some questions. The “real” journalist explains:

This week gig workers, trade unions and human rights groups launched a campaign for greater openness from Uber Eats, Just Eat and Deliveroo about the logic underpinning opaque algorithms that determine what work they do and what they are paid. The couriers wonder why someone who has only just logged on gets a gig while others waiting longer are overlooked. Why, when the restaurant is busy and crying out for couriers, does the app say there are none available?
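As a purely hypothetical illustration of why this feels opaque, consider a toy dispatch scorer. Nothing below is the actual Uber Eats, Just Eat, or Deliveroo logic; the features and weights are invented to show how a hidden scoring function can hand a gig to someone who just logged on while a longer-waiting courier is skipped.

# Hypothetical courier-assignment scorer. The features and weights are invented;
# real platforms do not publish theirs, which is the couriers' complaint.
def score(courier: dict) -> float:
    return (
        3.0 * courier["acceptance_rate"]           # rewards rarely declining jobs
        + 2.0 * courier["proximity"]               # closeness to the restaurant, 0-1
        - 0.5 * courier["minutes_waiting"] / 60.0  # waiting time barely counts
    )

couriers = [
    {"name": "just_logged_on", "acceptance_rate": 0.95, "proximity": 0.9, "minutes_waiting": 2},
    {"name": "waiting_an_hour", "acceptance_rate": 0.70, "proximity": 0.6, "minutes_waiting": 60},
]

# The courier who just logged on wins the gig even though the other has waited
# far longer, and neither can see why.
winner = max(couriers, key=score)
print(winner["name"], round(score(winner), 2))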

Confusing? To some, but to the senior managers of the organizations shifting to smart software, the cost savings are a big deal. Imagine. In Britain, a senior manager can spend a week or two in Nice, maybe Monaco? The write up reports:

The app companies say they do have rider support staffed by people and some information about the algorithms is available on their websites and when drivers are initially “onboarded”.

Of course the “app companies” say positive things. The issue is that management embraces smart software. A third-party firm is retained to advise the lawyers, the accountants, and possibly one presentable information technology person at a briefing. The options are considered and another third-party firm is retained to integrate the smart software. That third party retains a probably unpresentable IT person who can lash up some smart software to the baling-wire-and-spit enterprise software system. Bingo! The algorithms perform their magic. Oh, whom does one blame for a flawed solution? I don’t know. Just call in the lawyers.

The article explains the impact on a worker who delivers for people who cannot walk to a restaurant or the grocery:

“Every worker should understand the basis on which they are paid,” Farrar [a delivery professional] said. “But you’re being gamed into deciding whether to accept a job or not. Will I get a better offer? It’s like gambling and it’s very distressing and stressful for people. You are completely in a vacuum about how best to do the job and because people often don’t understand how decisions are being made about their work, it encourages conspiracies.”

To whom should Mr. Farrar and others shafted by math complain? Perhaps the Guardian newspaper, which is slightly less popular than TikTok or X.com, Facebook or Red Book, or BlueSky or YouTube. My suggestion would be for the Guardian to use these channels and beg for pounds or dollars like other valiant social media professionals. The person doing deliveries might want to explore working for Amazon deliveries and avail himself of Survival Russia videos when on his generous Amazon breaks. And what about the people who call a restaurant and specify at home delivery? I would recommend getting out of that comfy lounge chair and walking to the restaurant in person. While you wait for your lovingly-crafted meal at the Indian takeaway, you can read Weapons of Math Destruction.

Stephen E Arnold, January 31, 2025
