Forget Surveillance Capitalism. Think Parasite Culture
October 15, 2024
Ted Gioia bills himself as The Honest Broker on his blog, and he recently posted about the current state of the economy in “Are We Now Living In A Parasite Culture?” In the opening he provides examples of natural parasites before turning to his experience with parasite strategies.
Gioia said that when he consulted for Fortune 500 companies, he and others used parasite strategies as thought exercises. Here’s what a parasite strategy is:
1. “You allow (or convince) someone else to make big investments in developing a market—so they cover the cost of innovation, or advertising, or lobbying the government, or setting up distribution, or educating customers, or whatever. But…
2. You invest your energy instead on some way of cutting off these dutiful folks at the last moment—at the point of sale, for example. Hence…
3. You reap the benefits of an opportunity that you did nothing to create.”
On first reading, it doesn’t seem that our economy works like that, until he provides real examples: Facebook, Spotify, TikTok, and Google. All of these platforms are little more than central locations where people post and share their content, or they aggregate content from the Internet. These platforms thrive off the creativity of their users, and their executive boards reap the benefits while the creators struggle to rub two pennies together.
Smart influencers know to diversify their income streams through sponsorships, branding, merchandise, and more. Gioia points out that the Forbes list of billionaires includes people who used parasitical business strategies to get rich. He continues by saying that these parasites will keep guzzling their hosts’ lifeblood, with a chance of killing the host in the process.
It’s happening now in the creative economy with Big Tech’s investment in AI: despite lawsuits and laws, these companies are illegally training AI on creative work. He finishes with the obvious statement that politicians should be protecting people, but that they’re probably part of the problem. No duh.
Whitney Grace, October 15, 2024
Microsoft Security: A World First
September 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
After the somewhat critical comments of the chief information security officer for the US, Microsoft said it would do security better. The “Secure Future Initiative” is a 25-page document which contains some interesting comments. Let’s look at a handful.
Some bad actors just go where the pickings are the easiest. Thanks, MSFT Copilot. Good enough.
On page 2, I noted the record-setting effort Microsoft has completed:
Our engineering teams quickly dedicated the equivalent of 34,000 full-time engineers to address the highest priority security tasks—the largest cybersecurity engineering project in history.
Microsoft is a large software company. It has large security issues. Therefore, the company has undertaken the “largest cybersecurity engineering project in history.” That’s great for the Guinness Book of World Records. The question is, “Why?” The answer, it seems to me, is that Microsoft did “good enough” security, and the US government’s report said, in effect, “Nope. Not good enough.” Hence, a big and expensive series of changes. Have the changes been tested, or have unexpected security issues been introduced into the sprawl of Microsoft software? Another question from this dinobaby: Can a big company doing good enough security implement fixes to remediate “the highest priority security tasks”? Companies have difficulty changing certain work practices. Can “good enough” methods do the job?
On page 3:
Security added as a core priority for all employees, measured against all performance reviews. Microsoft’s senior leadership team’s compensation is now tied to security performance
Compensation is linked to security as a “core priority.” I am not sure what making something a “core priority” means, particularly when the organization has implemented security systems and methods which have been found wanting. When the US government gives a bad report card, one forms an impression of a fairly deep hole which needs to be filled with functional, reliable bits. Declaring a “core priority” does not translate into secure software from cloud to desktop.
On page 5:
To enhance governance, we have established a new Cybersecurity Governance Council…
The creation of a council, the addition of security responsibilities to some executives, and the hiring of a few others mean to me:
- Meetings and delays
- Adding duties may translate to other issues
- How much will these remediating processes cost?
Microsoft may be too big to change its culture in a timely manner. The time a council needs to enhance governance means fixing security problems may be slow going. Even with additional time and “the equivalent of 34,000 full-time engineers,” the effort may be a project management task of more than modest proportions.
On page 7:
Secure by design
Quite a subhead. How can Microsoft’s sweep of legacy and current products be made secure by design when these products have been shown to be insecure?
On page 10:
Our strategy for delivering enduring compliance with the standard is to identify how we will Start Right, Stay Right, and Get Right for each standard, which are then driven programmatically through dashboard driven reviews.
The alliteration is notable. However, what is “right”? What happens when, in the middle of fixing existing issues and adhering to a “standard,” the team finds that the “standard” has changed? The complexity of managing the process of getting something “right” is like an example from a Santa Fe Institute complexity book. The reality of addressing known security issues while conforming to standards which may change is interesting to contemplate. Words are great, but remediating what’s wrong in a dynamic and very complicated series of dependent services is likely to be a challenge. Bad actors will quickly probe for new issues. Generally speaking, bad actors find faults and exploit them. Thus, Microsoft will find itself in a troublesome mode: permanent reaction to previously unknown and newly introduced security issues.
On page 11, the security manifesto launches into “pillars.” I think the idea is that good security is built upon strong foundations. But when remediating “as is” code as well as legacy code, how long will the design, engineering, and construction of the pillars take? Months, years, decades, or multiple decades? The US CISO report card may not apply to certain time scales; for instance, big government contracts. Pillars are ideas.
Let’s look at one:
The monitor and detect threats pillar focuses on ensuring that all assets within Microsoft production infrastructure and services are emitting security logs in a standardized format that are accessible from a centralized data system for both effective threat hunting/investigation and monitoring purposes. This pillar also emphasizes the development of robust detection capabilities and processes to rapidly identify and respond to any anomalous access, behavior, and configuration.
The reality of today’s world is that security issues can arise from insiders. Outside threats seem to be identified each week. However, different cyber security firms identify and analyze different security issues. No one cyber security company delivers 100 percent foolproof threat identification. “Logs” are great; however, Microsoft used to charge to make a logging function available to a customer. Now, more logs. The problem is that logs help identify a breach after the fact; that is, a previously unknown vulnerability is exploited, or an old vulnerability makes its way into a Microsoft system through a user action. How can a company which has a poor report card issued by the US government become the firm with a threat detection system that is the equivalent of products now available from established vendors? The recent CrowdStrike misstep illustrates that the Microsoft culture created the opportunity for the procedural mistake someone made at CrowdStrike. The words are nice, but I am not that confident in Microsoft’s ability to build this pillar. Microsoft may have to punt, buy several competitive systems, and deploy them like mercenaries to protect the unmotivated Roman citizens in a century.
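For readers who want a concrete picture of the “standardized format” idea, here is a minimal sketch of what one of those centrally collected security log records could look like. The field names and the Python code are this dinobaby’s illustration, not Microsoft’s actual schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical field names; Microsoft's real internal log schema is not public.
def build_security_event(asset_id: str, event_type: str, details: dict) -> str:
    """Serialize one security event in a single, standardized JSON shape."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "asset_id": asset_id,        # which production asset emitted the event
        "event_type": event_type,    # e.g., "anomalous_access"
        "details": details,          # extra context for threat hunters
    }
    return json.dumps(record)

if __name__ == "__main__":
    # In a real pipeline this record would be shipped to a central log store;
    # printing keeps the sketch self-contained.
    print(build_security_event(
        "vm-eastus-0042",
        "anomalous_access",
        {"user": "svc-build", "source_ip": "203.0.113.7"},
    ))
```

The point is simply that every asset emits the same shape of record so a central system can hunt across all of them; whether Microsoft can retrofit that discipline onto decades of legacy services is the open question.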
I think reading the “Secure Future Initiative” is a useful exercise. Manifestos can add juice to a mission. However, can the troops deliver a victory over the bad actors who swarm to Microsoft systems and services because good enough is like a fried chicken leg to a colony of ants?
Stephen E Arnold, September 30, 2024
AI Automation Has a Benefit … for Some
September 26, 2024
Humanity’s progress runs parallel to advancing technology. As technology advances, aspects of human society and culture are rendered obsolete and replaced with new ones. Job automation is a huge part of this; past examples are the Industrial Revolution and the adoption of computers. AI algorithms are set to make another part of the labor force redundant, but the BBC claims that might be beneficial to workers: “Klarna: AI Lets Us Cut Thousands of Jobs - But Pay More.”
Klarna is a fintech company that provides online financial services and is described as a “buy now, pay later” company. Klarna plans to use AI to automate the work of the majority of its workforce. The company’s leaders have already canned 1,200 employees, and they plan to cut another 2,000 as AI takes over marketing and customer service. That leaves Klarna with a grand total of 1,800 employees, who will be paid more.
Klarna’s CEO Sebastian Siemiatkowski is putting a positive spin on cutting jobs by saying the remaining employees will receive larger salaries. While Siemiatkowski sees the benefits of AI, he does warn about AI’s downside and advises the government to do something. He said:
“ ‘I think politicians already today should consider whether there are other alternatives of how they could support people that may be effective,’ he told the Today programme, on BBC Radio 4.
He said it was “too simplistic” to simply say new jobs would be created in the future.
‘I mean, maybe you can become an influencer, but it’s hard to do so if you are 55-years-old,’ he said.”
The International Monetary Fund (IMF) predicts that AI will affect 40% of all jobs and is likely to worsen overall inequality. As Klarna reduces its staff, the company will rely on what is called “natural attrition,” aka a hiring freeze. The remaining workforce will have bigger workloads. Siemiatkowski claims AI will eventually reduce those workloads.
Will that really happen? Maybe?
Will the remaining workers receive a pay raise or will that money go straight to the leaders’ pockets? Probably.
Whitney Grace, September 26, 2024
Happy AI News: Job Losses? Nope, Not a Thing
September 19, 2024
This essay is the work of a dumb humanoid. No smart software required.
I read “AI May Not Steal Many Jobs after All. It May Just Make Workers More Efficient.” Immediately two points jumped out at me. The AP (the publisher of the “real” news story) is hedging with the weasel word “may” and the hedgy phrase “after all.” Why is this important? The “real” news industry is interested in smart software to reduce costs and generate more “real” news more quickly. The days when “real” reporters disappeared for hours to confirm a story with a source are often associated with fiddling around. The costs of doing anything without a gusher of money pumping 24×7 are daunting. The word “efficient” sits in the headline like a digital harridan stakeholder. Who wants that?
The manager of a global news operation reports that under his watch, he has achieved peak efficiency. Thanks, MSFT Copilot. Will this work for production software development? Good enough is the new benchmark, right?
The story itself strikes me as a bit of content marketing which says, “Hey, everyone can use AI to become more efficient.” The subtext is, “Hey, don’t worry. No software robot or agentic thingy will reduce staff. Probably.”
The AP is a litigious outfit even though I worked at a newspaper which “participated” in the business process of the entity. Here’s one sentence from the “real” news write up:
Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.
Yep, just like the steam engine and the Internet.
When technologies emerge, most go away or become componentized or dematerialized. When one of those hot technologies fails to produce revenues, quite predictable outcomes result. Executives get fired. VC firms do fancy dancing. IRS professionals squint at tax returns.
So far AI has been a “big guys win, sort of, because they have bundles of cash” and “little outfits lose control of their costs” story. Here’s my take:
- Human-generated news is expensive and if smart software can do a good enough job, that software will be deployed. The test will be real time. If the software fails, the company may sell itself, pivot, or run a garage sale.
- When “good enough” is the benchmark, staff will be replaced with smart software. Some of the whiz kids in AI like the buzzword “agentic.” Okay, agentic systems will replace humans with good enough smart software. That will happen. Excellence is not the goal. Money saving is.
- Over time, the ideas of the current transformer-based AI systems will be enriched by other numerical procedures, and maybe, just maybe, some novel methods will provide “smart software” with more capabilities. Right now, most smart software just finds a path through already-known information. No output is new, just close to what the system’s math concludes is on point. For the moment, the next generation of smart software remains in the future. How far? It’s anyone’s guess.
My hunch is that Amazon Audible will suggest that humans will not lose their jobs. However, the company is allegedly going to replace human voices with “audibles” generated by smart software. (For more about this displacement of humans, check out the Bloomberg story.)
Net net: The “real” news story prepares the field for planting writing software in an organization. It says, “Customers will benefit, and more jobs will be produced.” Great assertions. I think AI will be disruptive and in unpredictable ways. Why not come out and say, “If the agentic software is good enough, we will fire people”? Answer: Being upfront is not something those who are not dinobabies do.
Stephen E Arnold, September 19, 2024
IT Departments Losing Support From Top Brass
September 19, 2024
Modern businesses can’t exist today without technological infrastructure. Organizations rely on the IT department. Without the Internet, computers, and other technology, businesses come to a screeching halt. Despite the power IT departments wield, ZDNet says that “Business Leaders Are Losing Faith In IT, According To This IBM Study. Here’s Why.” According to the survey, ten years ago business leaders believed that basic IT services were effective. Now that confidence is only about half of what it used to be. Generative AI is also giving leaders the willies.
Business leaders are disgruntled with IT, and they have high expectations about what technology should deliver. Leaders want their technology to give their businesses a competitive edge. They’re also more technologically competent than their predecessors, so the leaders want instantaneous fixes and results.
A big problem is that the leaders and tech departments aren’t communicating and collaborating. Generative AI is making both parties worry, because one doesn’t know what the other is doing concerning implementation and how to use it. It’s important for these groups to start talking, because AI and hybrid cloud services are expected to consume 50% more of infrastructure budgets.
The survey shared suggestions to improve confidence in IT services. Among the usual suggestions were: hire more women who are IT or AI experts, make legacy systems AI-ready through infrastructure investments, use AI to build better AI, involve the workforce in how AI drives the business, and then these:
“Measure, measure, measure technology’s impact on business outcomes: Notably, among high-performing tech CxO respondents defined in the survey, the study found that organizations that connect technology investments to measurable business outcomes report 12% higher revenue growth.
Talk about outcomes, not about data: "Focus on shared objectives by finding a common language with the business based on enhancing the customer experience and delivering outcomes. Use storytelling and scenario-based exercises to drive tech and the business to a shared understanding of the customer journey and pain points."
It’s the usual information with an updated spin on investing in the future, diversifying the workforce, and listening to the needs of workers. It’s the same stuff in a new package.
Whitney Grace, September 19, 2024
Great Moments in Leadership: Drive an Uber
September 18, 2024
I was zipping through my newsfeed and spotted this item: “Ex-Sony Boss Tells Laid-Off Employees to Drive an Uber and Find a Cheap Place to Live.” In the article, the ex-Sony boss is quoted as allegedly saying:
I think it’s probably very painful for the managers, but I don’t think that having skill in this area is going to be a lifetime of poverty or limitation. It’s still where the action is, and it’s like the pandemic but now you’re going to have to take a few…figure out how to get through it, drive an Uber or whatever, go off to find a cheap place to live and go to the beach for a year.
I admit that I find the advice reasonably practical. However, it costs money to summon an Uber. The other titbit is that a person without a job should find a “cheap place to live.” Ah, ha, van life or moving in with a friend. Possibly one could become a homeless person dwelling near a beach. What if the terminated individual has a family? I suppose there are community food services.
From an employee’s point of view, this is “tough love” management. How effective is this approach? I worked for a number of firms over my 50-plus-year career prior to retiring in 2013. I can honestly say that this drive-an-Uber-and-move-somewhere-cheaper advice is remarkable. It is novel. Possibly a breakthrough in management methods.
I look forward to a TED talk from this leader. When will the Harvard Business Review present a more in-depth look at the former Sony president’s ideas? Oh, right. “Former” is the operative word. Yep, former.
Stephen E Arnold, September 17, 2024
CrowdStrike: Whiffing Security As a Management Precept
September 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Not many cyber security outfits can make headlines like NSO Group. But no longer. A new buzz champion has been crowned: CrowdStrike. I learned a bit more about the company’s commitment to rigorous engineering and exemplary security practices from “CrowdStrike Ex-Employees: Quality Control Was Not Part of Our Process.” NSO Group’s rise to stardom was propelled by its leadership and belief in the superiority of Israeli security-related engineering. CrowdStrike skipped that and perfected a type of software that could strand passengers, curtail surgeries, and force Microsoft to rethink its own wonky decisions about kernel access.
A trained falcon tells an onlooker to go away. The falcon, a stubborn bird, has fallen in love with a limestone gargoyle. Its executive function resists inputs. Thanks, MSFT Copilot. Good enough.
The write up says:
Software engineers at the cybersecurity firm CrowdStrike complained about rushed deadlines, excessive workloads, and increasing technical problems to higher-ups for more than a year before a catastrophic failure of its software paralyzed airlines and knocked banking and other services offline for hours.
Let’s assume this statement is semi-close to the truth pin on the cyber security golf course. In fact, the company insists that it did not cheat like a James Bond villain playing a round of golf. The article reports:
CrowdStrike disputed much of Semafor’s reporting and said the information came from “disgruntled former employees, some of whom were terminated for clear violations of company policy.” The company told Semafor: “CrowdStrike is committed to ensuring the resiliency of our products through rigorous testing and quality control, and categorically rejects any claim to the contrary.”
I think someone at CrowdStrike has channeled a mediocre law school graduate and a former PR professional from a mid-tier publicity firm in Manhattan, lower Manhattan, maybe in Alphabet City.
The article runs through a litany of short cuts. You can read the original article and sort them out.
The company’s flagship product is called “Falcon.” The idea is that the outstanding software can, like a falcon, spot its prey (a computer virus). Then it can solve trajectory calculations and snatch the careless gopher. One gets a plump Falcon and one gopher filling in for a burrito at a convenience store on the Information Superhighway.
The killer paragraph in the story, in my opinion, is:
Ex-employees cited increased workloads as one reason they didn’t improve upon old code. Several said they were given more work following staff reductions and reorganizations; CrowdStrike declined to comment on layoffs and said the company has “consistently grown its headcount year over year.” It added that R&D expenses increased from $371.3 million to $768.5 million from fiscal years 2022 to 2024, “the majority of which is attributable to increased headcount.”
I buy the complaining former employee argument. But the article cites a number of CrowdStrikers who are taking their expertise and work ethic elsewhere. As a result, I think the fault is indeed a management problem.
What does one do with a bad Falcon? I would put a hood on the bird and let it scroll TikToks. Bewits and bells would alert me when one of these birds was getting close to me.
Stephen E Arnold, September 16, 2024
Brin Is Back and Working Every Day at Google: Will He Be Summoned to Appear and Testify?
September 11, 2024
This essay is the work of a dumb humanoid. No smart software required.
I read some “real” news in the article “Sergey Brin Says He’s Working on AI at Google Pretty Much Every Day.” The write up does not provide specifics of his employment agreement, but the headline says “every day.” Does this mean that those dragging the Google into court will add him to their witness list? I am not an attorney, but I would be interested in finding out about the mechanisms for the alleged monopolistic lock-in in the Google advertising system. Oh, well. I am equally intrigued to know if Mr. Brin will wear his roller blades to big meetings as he did with Viacom’s Big Dog.
My question is, “Can Mr. Brin go home again?” As Thomas Wolfe noted in his novel You Can’t Go Home Again:
Every corner of our home has a story to tell.
I wonder if those dragging Alphabet Google YouTube into court will want to dig into that “story”?
Now what does the “real” news report other than Mr. Brin’s working every day? These items jumped off my screen and into my dinobaby mind:
- AI has tremendous value to humanity. I am not sure what this means when VCs, users, and assorted poohbahs point out that AI is burning cash, not generating it.
- AI is big and fast moving. Okay, but since the Microsoft AI marketing play with OpenAI, the flurry of activity has not translated into rapid-fire next big things. In fact, progress on consumer-facing AI services has stalled. Even Google is reluctant to glue cheese to a pizza, if you know what I mean.
- The algorithms are demanding more “compute.” I think this means power, CPUs, and data. But Google is buying carbon credits, you say. Yeah, those are useful for PR, not for providing what Mr. Brin seems to suggest is needed to do AI.
Several thoughts crossed my mind:
First, most of the algorithms for smart software were presented in patent document form by Banjo, a SoftBank-backed company that ran into some headwinds. The algorithms and numerical recipes were known and explained in Banjo’s patent documents. The missing piece was Google’s “transformer” method, which the company released as open source. Well, so what? That is the reason large language models are becoming the same old same old. The Big Dogs of AI are using the same plumbing. Not much is new other than the hyperbole, right?
Second, where does Mr. Brin fit into the Google leadership setup? I am not sure he is in the cast of the Sundar & Prabhakar Comedy Show. What happens when he makes a suggestion? Who “approves” something he puts “wood” behind? Does his presence deliver entropy or chaos? Does he exist on the boundary, working his magic as he did with the Clever technology developed at IBM Almaden?
Third, how quickly will his working “pretty much every day” move him onto witness lists? Perhaps he will be asked to contribute to EU, US House, and US Senate hearings? How will Google work out the lingo of one of the original Googlers and the current “leadership”? The answer is meetings, scripting, and practicing. Aren’t these the things that motivated Mr. Brin to leave the company to pursue other interests? Now he wants back in.
To sum up, just when I thought Google had reached peak dysfunction, I was wrong again.
Stephen E Arnold, September 11, 2024
Is AI Taking Jobs? Of Course Not
September 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read an unusual story about smart software. “AI May Not Steal Many Jobs After All. It May Just Make Workers More Efficient” espouses the notion that workers will use smart software to do their jobs more efficiently. I have some issues with this thesis, but let’s look at a couple of the points in the “real” news write up.
Thanks, MSFT Copilot. When will the Copilot robot take over a company and subscribe to Office 365 for eternity and pay up front?
Here’s some good news for those who believe smart software will kill humanoids:
AI may not prove to be the job killer that many people fear. Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the Internet: That is, eliminate some jobs while creating others. And probably making workers more productive in general, to the eventual benefit of themselves, their employers and the economy.
I am not sure doomsayers will be convinced. Among the most interesting doomsayers are those who may be unemployable but looking for a hook to stand out from the crowd.
Here’s another key point in the write up:
The White House Council of Economic Advisers said last month that it found “little evidence that AI will negatively impact overall employment.’’ The advisers noted that history shows technology typically makes companies more productive, speeding economic growth and creating new types of jobs in unexpected ways. They cited a study this year led by David Autor, a leading MIT economist: It concluded that 60% of the jobs Americans held in 2018 didn’t even exist in 1940, having been created by technologies that emerged only later.
I love positive statements which invoke the authority of MIT, an outfit which found Jeffrey Epstein just a wonderful source of inspiration and donations. As the US shifted from making things to providing services, the beneficiaries have been those who have quite specific skills for which demand exists.
And now a case study which is assuming “chestnut” status:
The Swedish furniture retailer IKEA, for example, introduced a customer-service chatbot in 2021 to handle simple inquiries. Instead of cutting jobs, IKEA retrained 8,500 customer-service workers to handle such tasks as advising customers on interior design and fielding complicated customer calls.
The point of the write up is that smart software is a friendly helper. That seems okay for the state of transformer-centric methods available today. For a moment, let’s consider another path. This is a hypothetical, of course, like the profits from existing AI investment fliers.
What happens when another, perhaps more capable approach to smart software becomes available? What if the economies from improving efficiency whet the appetite of bean counters for greater savings?
My view is that these reassurances of 2024 are likely to ring false when the next wave of innovation in smart software flows from innovators. I am glad I am a dinobaby because software can replicate most of what I have done for almost the entirety of my 60-plus year work career.
Stephen E Arnold, September 9, 2024
Uber Leadership May Have to Spend Money to Protect Drivers. Wow.
September 5, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Senior managers — now called “leadership” — care about their employees. I added a wonderful example about corporate employee well being and co-worker sensitivity when I read “Wells Fargo Employee Found Dead in Her Cubicle 4 Days After She Clocked in for Work.” One of my team asked me, “Will leadership at that firm check her hours of work so she is not overpaid for the day she died?” I replied, “You will make a wonderful corporate leader one day.” Another analyst asked, “Didn’t the cleaning crew notice?” I replied, “Not when they come once every two weeks.”
Thanks, MSFT Copilot. Good enough given your filters.
A similar approach to employee care popped up this morning. My newsreader displayed this headline: “Ninth Circuit Rules Uber Had Duty to Protect Washington Driver Murdered by Passengers.” The write up reported:
The estate of Uber driver Cherno Ceesay sued the rideshare company for negligence and wrongful death in 2021, arguing that Uber knew drivers were at risk of violent assault from passengers but neglected to install any basic safety measures, such as barriers between the front and back seats of Uber vehicles or dash cameras. They also claimed Uber failed to employ basic identity-verification technology to screen out the two customers who murdered Ceesay — Olivia Breanna-Lennon Bebic and Devin Kekoa Wade — even though they opened the Uber account using a fake name and unverified form of payment just minutes before calling for the ride.
Hold it right there. The reason behind the alleged “failure” may be the cost of barriers, dash cams, and identity verification technology. Uber is a Big Dog high technology company. Its software manages rides, maps, payments, and the outstanding Uber app. If you want to know where your driver is, text the professional. Want to know the percentage of requests matched to drivers from a specific geographic point? Forget that, gentle reader. Request a ride and wait for a confirmation. Oh, what if a pickup is cancelled after a confirmation? Fire up Lyft, right?
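To make the point about what “basic identity-verification technology” might mean in practice, here is a small hypothetical sketch of a rider-screening rule. The field names, thresholds, and logic are my own invention for illustration; they are not Uber’s actual system.

```python
from dataclasses import dataclass

@dataclass
class RiderAccount:
    payment_verified: bool       # did the payment method clear verification?
    identity_verified: bool      # did the rider pass an ID check?
    account_age_minutes: int     # how long ago the account was created

def allow_ride_request(account: RiderAccount, min_account_age: int = 60) -> bool:
    """Return True only if the account clears simple screening checks."""
    if not account.payment_verified:
        return False
    # Brand-new, unverified accounts are the pattern described in the lawsuit.
    if not account.identity_verified and account.account_age_minutes < min_account_age:
        return False
    return True

# An account opened minutes earlier under a fake name with an unverified
# payment method would fail this hypothetical check.
print(allow_ride_request(RiderAccount(False, False, 5)))  # prints: False
```

Even a rule this crude adds cost: someone has to verify payments and identities, and declined ride requests are lost revenue, which may explain the alleged reluctance.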
The cost of providing “basic” safety for drivers is what helps make old-fashioned taxi rides slightly more “safe.” At one time, Uber was cheaper than a weirdly painted taxi with a snappy phone number like 666 6666 or 777 7777 painted on the side. Now that taxis have been stressed by Uber, Uber rides have become more expensive. Thanks to surge pricing, Uber in some areas costs more than taxis and some black car services, if one can find one.
Uber wants cash and profits. “Basic” safety may add the friction of additional costs for staff, software licenses, and tangibles like plastic barriers and dash cams. The write up explains by quoting the legalese of the court decision; to wit:
“Uber alone controlled the verification methods of drivers and riders, what information to make available to each respective party, and consistently represented to drivers that it took their safety into consideration. Ceesay relied entirely on Uber to match him with riders, and he was not given any meaningful information about the rider other than their location,” the majority wrote.
Now what? I am no legal eagle. I think Uber “leadership” will have meetings. Appropriate consultants will be retained to provide action plan options. Then staff (possibly AI assisted) will figure out how to reduce the probability of a murder in or near an Uber contractor’s vehicle.
My hunch is that the process will take time. In the meantime, I wonder if the Uber app autofills the “tip” section and then intelligently closes out that specific ride? I am confident that universities offering business classes will incorporate one or both of these examples in a class about corporate “leadership” principles. Tip: The money matters. Period.
Stephen E Arnold, September 5, 2024