Skills You Can Skip: Someone Is Pushing What Seems to Be Craziness

October 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The Harvard ethics research scam has ended. The Stanford University president resigned over fake data late in 2023. A clump of students in an ethics class used smart software to write their first paper. Why not use smart software? Why not let AI or just dishonest professors make up data with the help of assorted tools like Excel and Photoshop? Yeah, why not?


A successful pundit and lecturer explains to his acolyte that learning to write is a waste of time. And what does the pundit lecture about? I think he was pitching his new book, which does not require that one learn to write. Logical? Absolutely. Thanks, MSFT Copilot. Good enough.

My answer to the question is: “Learning is fundamental.” No, I did not make that up, nor did I believe the information in “How AI Can Save You Time: Here Are 5 Skills You No Longer Need to Learn.” The write up has sources; it has quotes; and it has the type of information which is hard to believe was assembled by humans who presumably have some education, maybe a college degree.

What are the five skills you no longer need to learn? Hang on:

  1. Writing
  2. Art design
  3. Data entry
  4. Data analysis
  5. Video editing.

The expert who generously shared his remarkable insights for the Euro News article is Bernard Marr, a futurist and internationally best-selling author. What did Mr. Marr author? He has written “Artificial Intelligence in Practice: How 50 Successful Companies Used Artificial Intelligence To Solve Problems,” “Key Performance Indicators For Dummies,” and “The Intelligence Revolution: Transforming Your Business With AI.”

One question: If writing is a skill one does not need to learn, why does Mr. Marr write books?

I wonder if Mr. Marr relies on AI to help him write his books. He seems prolific: Amazon reports that he has outputted more than a dozen books, maybe more. But volume does not explain the tension between Mr. Marr’s “writing” (which may be outputting) and the suggestion that one does not need to learn or develop the skill of writing.

The cited article quotes the prolific Mr. Marr as saying:

“People often get scared when you think about all the capabilities that AI now have. So what does it mean for my job as someone that writes, for example, will this mean that in the future tools like ChatGPT will write all our articles? And the answer is no. But what it will do is it will augment our jobs.”

Yep, Mr. Marr’s job is outputting. You don’t need to learn writing. Smart software will augment one’s job.

My conclusion is that the five identified areas are plucked from a listicle, either generated by a human or an AI system. Euro News was impressed with Mr. Marr’s laser-bright insight about smart software. Will I purchase and learn from Mr. Marr’s “Generative AI in Practice: 100+ Amazing Ways Generative Artificial Intelligence is Changing Business and Society”?

Nope.

Stephen E Arnold, October 4, 2024

SolarWinds Outputs Information: Does Anyone Other Than Microsoft and the US Government Remember?

October 3, 2024

I love these dribs and drops of information about security issues. From the maelstrom of emails, meeting notes, and SMS messages, only glimpses emerge of what’s going on when a security misstep takes place. That’s why the write up “SolarWinds Security Chief Calls for Tighter Cyber Laws” is interesting to me. How many lawyer-type discussions were held before the SolarWinds professional spoke with a “real” news person from the somewhat odd orange newspaper? (The Financial Times used to give these things away in front of their building some years back. Yep, the orange newspaper caught some people’s eye in meetings which I attended.)

The subject of the interview was a person who is/was the chief information security officer at SolarWinds. He was on duty when the tiny misstep took place. I will leave it to you to determine whether the CrowdStrike misstep or the SolarWinds misstep was of more consequence. Neither affected me because I am a dinobaby in rural Kentucky running steam powered computers from my next generation office in a hollow.


A dinobaby is working on a blog post in rural Kentucky. This talented and attractive individual was not affected by either the SolarWinds or the CrowdStrike security misstep. A few others were not quite so fortunate. But, hey, who remembers or cares? Thanks, Microsoft Copilot. I look exactly like this. Or close enough.

Here are three statements from the article in the orange newspaper I noted:

First, I learned that:

… cyber regulations are still ‘in flux’ which ‘absolutely adds stress across the globe’ on cyber chiefs.

I am delighted to learn that those working in cyber security experience stress. I wonder, however, what about the individuals and organizations who must think about the consequences of having their systems breached. These folks pay to be secure, I believe. When that security fails, will the affected individuals worry about the “stress” on those who were supposed to prevent a minor security misstep? I know I sure worry about these experts.

Second, how about this observation by the SolarWinds cyber security professional?

“When you don’t have rules to follow, it’s very hard to follow them,” said Brown [the cyber security leader at SolarWinds]. “Very few security people would ever do something that wasn’t right, but you just have to tell us what’s right in order to do it,” he added.

Let’s think about this statement. To be a senior cyber security professional one has to be trained, have some cyber security certifications, and maybe some specialized in-service instruction at conferences or specific training events. Therefore, those who attend these events allegedly “learn” what rules to follow; for instance, make systems secure, conduct routine stress tests, have third party firms conduct security audits, validate the code, widgets, and APIs one uses, etc., etc. Is it realistic to assume that an elected official knows anything about security systems at a cyber security firm? As a dinobaby, my view is that these cyber wizards need to do their jobs and not wait for non-experts to give them “rules.” Make the systems secure via real work, not chatting at conferences or drinking coffee in a conference room.

And, finally, here’s another item I circled in the orange newspaper:

Brown this month joined the advisory board of Israeli crisis management firm Cytactic but said he was still committed to staying in his role at SolarWinds. “As far as the incident at SolarWinds: It happened on my watch. Was I ultimately responsible? Well, no, but it happened on my watch and I want to get it right,” he said.

Wasn’t Israel the country caught flat-footed in October 2023? How does a company in Israel — presumably with staff familiar with the tools and technologies used to alert Israel of hostile actions — learn from another security professional caught flat-footed? I know this is an easily dismissed question, but for a dinobaby, doesn’t one want to learn from a person who gets things right? As I said, I am old fashioned, old, and working in a log cabin on a steam-powered computing device.

The reality is that egregious security breaches have taken place. The companies and their staff are responsible. Are there consequences? I am not so sure. That means the present “tell us the rules” attitude will persist. Factoid: Government regulations in the US are years behind what clever companies and their executives do. No gap closing, sorry.

Stephen E Arnold, October 3, 2024

Big Companies: Bad Guys

October 3, 2024

Just in time for the UN Summit, the International Trade Union Confederation is calling out large corporations. The Guardian reports, “Amazon, Tesla, and Meta Among World’s Top Companies Undermining Democracy—Report.” Writer Michael Sainato tells us:

“Some of the world’s largest companies have been accused of undermining democracy across the world by financially backing far-right political movements, funding and exacerbating the climate crisis, and violating trade union rights and human rights in a report published on Monday by the International Trade Union Confederation (ITUC). Amazon, Tesla, Meta, ExxonMobil, Blackstone, Vanguard and Glencore are the corporations included in the report. The companies’ lobbying arms are attempting to shape global policy at the United Nations Summit of the Future in New York City on 22 and 23 September.”

The write-up shares a few of the report’s key criticisms. It denounces Amazon, for example, for practices from union busting and low wages to sky-high carbon emissions and tax evasion. Tesla, the ITUC charges, commits human rights violations while its majority shareholder Elon Musk loudly rails against democracy itself. And, the report continues, not only has Meta severely amplified far-right propaganda and groups around the world, it actively lobbies against data privacy laws. See the write-up for more examples.

The article concludes by telling us a little about the International Trade Union Confederation:

“The ITUC includes labor group affiliates from 169 nations and territories around the world representing 191 million workers, including the AFL-CIO, the largest federation of labor unions in the US, and the Trades Union Congress in the UK. With 4 billion people around the world set to participate in elections in 2024, the federation is pushing for an international binding treaty being worked on by the Open-ended intergovernmental working group to hold transnational corporations accountable under international human rights laws.”

Holding transnational corporations accountable—is that even possible? We shall see.

Cynthia Murrell, October 3, 2024

Open Source Versus Commercial Software. The Result? A Hybrid Which Neither Parent May Love

September 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have been down the open source trail a couple of times. The journey was pretty crazy because open source was software created to fulfill a computer science class requirement, a way to provide some “here’s what I can do” vibes to a résumé when résumés were anchored to someone’s reality, and “play” that would get code adopted so those in the know could sell services like engineering support, customizing, optimizing, and “making it mostly work.” In this fruit cake were licenses, VCs, lone wolves, and a few people creating something useful for a “community” which might or might not “support” the effort. Flip enough open source rocks and one finds some fascinating beasts like IBM, Microsoft, and other giant technology outfits.


Some hybrids work; others do not. Thanks, MSFT Copilot, good enough.

Today I learned there is now a hybrid of open source and proprietary (commercial) software. The write up “Some Startups Are Going Fair Source to Avoid the Pitfalls of Open Source Licensing” states:

The fair source concept is designed to help companies align themselves with the “open” software development sphere, without encroaching into existing licensing landscapes, be that open source, open core, or source-available, and while avoiding any negative associations that exist with “proprietary.” However, fair source is also a response to the growing sense that open source isn’t working out commercially.

Okay. I think “not working out commercially” is “real news” speak for “You can’t make enough money to become a Silicon Type mogul.” The write up adds:

Businesses that have flown the open source flag have mostly retreated to protect their hard work, moving either from fully permissive to a more restrictive “copyleft” license, as the likes of Element did last year and Grafana before it, or ditched open source altogether as HashiCorp did with Terraform.

These are significant developments. What about companies which have built substantial businesses surfing on open source software and have not been giving back in equal measure to the “community”? My hunch is that many start ups use the open source card as a way to get some marketing wind in their tiny sails. Other outfits just cobble together a number of open source software components and assemble a new and revolutionary product. The savings come from skipping the expense of developing an original solution; open source software becomes the foundation of what turns into a proprietary system. The origins of some software are either ignored by some firms or lost in the haze of employee turnover. After all, who remembers? A number of intelware companies which offer specialized services to government agencies incorporate some open source software and use their low profile or operational secrecy to mask what their often expensive products provide to a government entity.

The write up notes:

For now, the main recommended fair source license is the Functional Source License (FSL), which Sentry itself launched last year as a simpler alternative to BUSL. However, BUSL itself has also now been designated fair source, as has another new Sentry-created license called the Fair Core License (FCL), both of which are included to support the needs of different projects. Companies are welcome to submit their own license for consideration, though all fair source licenses should have three core stipulations: It [the code] should be publicly available to read; allow third parties to use, modify, and redistribute with “minimal restrictions”; and have a delayed open source publication (DOSP) stipulation, meaning it converts to a true open source license after a predefined period of time. With Sentry’s FSL license, that period is two years; for BUSL, the default period is four years. The concept of “delaying” publication of source code under a true open source license is a key defining element of a fair source license, separating it from other models such as open core. The DOSP protects a company’s commercial interests in the short term, before the code becomes fully open source.

My reaction is that lawyers will delight in litigating such notions as “minimal restrictions.” The cited article correctly in my opinion says:

Much is open to interpretation and can be “legally fuzzy.”
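To make the DOSP idea concrete, here is a minimal sketch in Python, assuming hypothetical dates and a hypothetical conversion license alongside the two-year FSL default mentioned above. It is a toy illustration of “converts to a true open source license after a predefined period,” not any official fair source tooling:

```python
# Toy sketch of delayed open source publication (DOSP): the governing
# license for a given release is a pure function of the calendar.
from datetime import date

# Hypothetical example values; FSL's default delay is two years,
# BUSL's default is four.
RELEASE_DATE = date(2024, 9, 30)   # when this version shipped
DOSP_YEARS = 2                     # FSL-style two-year delay
CHANGE_LICENSE = "Apache-2.0"      # hypothetical open source target license

def effective_license(today: date) -> str:
    """Return the license governing this release on a given day."""
    change_date = RELEASE_DATE.replace(year=RELEASE_DATE.year + DOSP_YEARS)
    return CHANGE_LICENSE if today >= change_date else "FSL-1.1 (fair source)"

print(effective_license(date(2025, 1, 1)))   # still fair source
print(effective_license(date(2026, 10, 1)))  # converted to Apache-2.0
```

The calendar arithmetic is the easy, unambiguous part; as the article suggests, it is phrases like “minimal restrictions” that will keep the lawyers busy.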

Is a revolution in software licensing underway?

Some hybrids live; others die.

Stephen E Arnold, September 30, 2024

Salesforce: AI Dreams

September 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Big Tech companies are heavily investing in AI technology, including Salesforce. Salesforce CEO Marc Benioff delivered a keynote about his company’s future and the end of an era as reported by Constellation Research: “Salesforce Dreamforce 2024: Takeaways On Agentic AI, Platform, End Of Copilot Era.” Benioff described the copilot era as “hit or miss” and he wants to focus on agentic AI powered by Salesforce.

Constellation Research analyst Doug Henschen said that Benioff made a compelling case for Salesforce and Data Cloud being the platform that companies will use to build their AI agents. Salesforce already has metadata, data, app business logic knowledge, and more programmed in it, while Data Cloud has data integrated from third-party data clouds and ingested from external apps. Combining these components into one platform without DIY makes for a very appealing product.

Benioff and his team revamped Salesforce to be less a series of clouds that run independently and more of a bunch of clouds that work together in a native system. It means Salesforce will scale Agentforce across Marketing, Commerce, Sales, Revenue and Service Clouds as well as Tableau.

The new AI Salesforce wants to delete DIY, says Benioff:

“‘DIY means I’m just putting it all together on my own. But I don’t think you can DIY this. You want a single, professionally managed, secure, reliable, available platform. You want the ability to deploy this Agentforce capability across all of these people that are so important for your company. We all have struggled in the last two years with this vision of copilots and LLMs. Why are we doing that? We can move from chatbots to copilots to this new Agentforce world, and it’s going to know your business, plan, reason and take action on your behalf.

It’s about the Salesforce platform, and it’s about our core mantra at Salesforce, which is, you don’t want to DIY it. This is why we started this company.’”

Benioff has big plans for Salesforce, and based on this Dreamforce keynote, they may succeed. However, AI is still experimental. AI is smart, but a human is still easier to work with. Salesforce should consider teaming AI with real people for the ultimate solution.

Whitney Grace, September 30, 2024

Stupidity: Under-Valued

September 27, 2024

We’re taught from a young age that being stupid is bad. The stupid kids don’t move on to higher grades, and they’re ridiculed on the playground. We’re also fearful of showing our stupidity, which often goes hand in hand with ignorance. Both cause embarrassment and fear, but Math For Love has a different perspective: “The Centrality Of Stupidity In Mathematics.”

Math For Love is a Web site dedicated to revolutionizing how math is taught. They have games, curricula, and more demonstrating how beautiful and fun math is. Math is one of those subjects that makes a lot of people feel dumb, especially at the higher levels. The Math For Love team referenced an essay by Martin A. Schwartz called “The Importance Of Stupidity In Scientific Research.”

Schwartz is a microbiologist and professor at the University of Virginia. In his essay, he expounds on how modern academia makes people feel stupid.

The stupid feeling is one of inferiority. It’s a problem. We’re made to believe that doctors, engineers, scientists, teachers, and other smart people never experienced any difficulty. Schwartz points out that students (and humanity) need to learn that research is extremely hard. No one starts out at the top. He also says that they need to be taught how to be productively stupid, i.e., if you don’t feel stupid then you’re not really trying.

Humans are meant to feel stupid, otherwise they wouldn’t investigate, explore, or experiment. There’s an entire era in western history about overcoming stupidity: the Enlightenment. Math For Love explains that stupidity is relative to age: as a child grows, they overcome certain levels of stupidity, aka ignorance. Kids gain comprehension of an idea, then can apply it to life. It’s the literal meaning of the euphemism: once a mind has been stretched, it can’t go back to its original size.

“I’ve come to believe that one of the best ways to address the centrality of stupidity is to take on two opposing efforts at once: you need to assure students that they are not stupid, while at the same time communicating that feeling like they are stupid is totally natural. The message isn’t that they shouldn’t be feeling stupid – that denies their honest feeling to learning the subject. The message is that of course they’re feeling stupid… that’s how everyone has to feel in order to learn math!”

Add some warm feelings to the equation and subtract self-consciousness, multiply by practice, and divide by intelligence level. That will round out stupidity and make it productive.

Whitney Grace, September 27, 2024

AI Maybe Should Not Be Accurate, Correct, or Reliable?

September 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Okay, AI does not hallucinate. “AI” — whatever that means — does output incorrect, false, made up, and possibly problematic answers. The buzzword “hallucinate” was cooked up by experts in artificial intelligence who do whatever they can to avoid talking about probabilities, human biases migrated into algorithms, and fiddling with the knobs and dials in the computational wonderland of an AI system like Google’s, OpenAI’s, et al. Even the book Why Machines Learn: The Elegant Math Behind Modern AI ends up tangled in math and jargon which may befuddle readers who stopped taking math after high school algebra or who have never thought about orthogonal matrices.

The Next Web’s “AI Doesn’t Hallucinate — Why Attributing Human Traits to Tech Is Users’ Biggest Pitfall” is an interesting write up. On one hand, it probably captures the attitude of those who just love that AI goodness by blaming humans for anthropomorphizing smart software. On the other hand, the AI systems with which I have interacted output content that is wrong or wonky. I admit that I ask the systems to which I have access for information on topics about which I have some knowledge. Keep in mind that I am an 80 year old dinobaby, and I view “knowledge” as something that comes from bright people working on projects, reading relevant books and articles, and attending conference presentations or meetings about subjects far from the best exercise leggings or how to get a Web page to the top of a Google results list.

Let’s look at two of the points in the article which caught my attention.

First, consider this passage, which is a quote from an AI expert:

“Luckily, it’s not a very widespread problem. It only happens between 2% to maybe 10% of the time at the high end. But still, it can be very dangerous in a business environment. Imagine asking an AI system to diagnose a patient or land an aeroplane,” says Amr Awadallah, an AI expert who’s set to give a talk at VDS2024 on How Gen-AI is Transforming Business & Avoiding the Pitfalls.

Where does the 2 percent to 10 percent number come from? What methods were used to determine that content was off the mark? What was the sample size? Has bad output been tracked longitudinally for the tested systems? Ah, so many questions and zero answers. My take is that the jargon “hallucination” is coming back to bite AI experts on the ankle.

Second, what’s the fix? Not surprisingly, the way out of the problem is to rename “hallucination” to “confabulation”. That’s helpful. Here’s the passage I circled:

“It’s really attributing more to the AI than it is. It’s not thinking in the same way we’re thinking. All it’s doing is trying to predict what the next word should be given all the previous words that have been said,” Awadallah explains. If he had to give this occurrence a name, he would call it a ‘confabulation.’ Confabulations are essentially the addition of words or sentences that fill in the blanks in a way that makes the information look credible, even if it’s incorrect. “[AI models are] highly incentivized to answer any question. It doesn’t want to tell you, ‘I don’t know’,” says Awadallah.

Third, let’s not forget that the problem rests with the users, the personifiers, the people who own French bulldogs and talk to them as though they were the favorite in a large family. Here’s the passage:

The danger here is that while some confabulations are easy to detect because they border on the absurd, most of the time an AI will present information that is very believable. And the more we begin to rely on AI to help us speed up productivity, the more we may take their seemingly believable responses at face value. This means companies need to be vigilant about including human oversight for every task an AI completes, dedicating more and not less time and resources.

The ending of the article is a remarkable statement; to wit:

As we edge closer and closer to eliminating AI confabulations, an interesting question to consider is, do we actually want AI to be factual and correct 100% of the time? Could limiting their responses also limit our ability to use them for creative tasks?

Let me answer the question: Yes, outputs should be presented and possibly scored; for example, 90 percent probable that the information is verifiable. Maybe emojis will work? Wow.
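For what it is worth, here is a minimal sketch of that scoring idea in Python, assuming a hypothetical ScoredAnswer wrapper; where the number would come from (token probabilities, retrieval agreement, human review) is left open, since the cited article does not say:

```python
# Toy sketch: present each generated answer with an explicit
# verifiability estimate instead of as flat, unqualified fact.
from dataclasses import dataclass

@dataclass
class ScoredAnswer:
    text: str
    verifiability: float  # 0.0 to 1.0; probability the claim checks out

    def render(self) -> str:
        # Surface the score next to the claim, per the suggestion above.
        return f"{self.text} [estimated {self.verifiability:.0%} verifiable]"

answer = ScoredAnswer("Hallucination rates run between 2% and 10%.", 0.90)
print(answer.render())
# Hallucination rates run between 2% and 10%. [estimated 90% verifiable]
```

Whether users would read the score or just the claim is, of course, another question.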

Stephen E Arnold, September 26, 2024

Discord: Following the Telegram Road Map?

September 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A couple of weeks ago, I presented some Telegram (the company in Dubai’s tax-free zone) information. My team and I created a timeline, a type of information display popular among investigators and intelligence analysts. The idea is that if one can look at events across a span of hours, days, months, or years in the case of Telegram, one can get some insight into what I call the “innovation cadence” of the entity, staff growth or loss, type of business activity in which the outfit engages, etc.


Some high-technology outfits follow road maps in circulation for a decade or more. Thanks, MSFT Copilot. Good enough.

I read “Discord Launches End-to-End Encrypted Voice and Video Chats.” This social media outfit is pushing forward with E2EE. Because the company is located in the US, the firm operates under the umbrella of US laws, rules, and regulations. Consequently, US government officials can obtain documents which request certain information from the company. I want to skip over this announcement and the E2EE system and methods which Discord is using or will use as it expands its services.

I want to raise the question, “Is Discord following the Telegram road map?” Telegram, as you probably know, does not provide end-to-end encryption by default. In order to send a “secret” encrypted message, one has to click through several screens and send a message to a person who must be online to make the Telegram system work. However, Telegram provides less sophisticated methods of keeping messages private. These tactics include a split between public Groups and private Groups. Clever Telegram users can use Telegram as a back end from which to deliver ransomware or engage in commercial transactions. One of the important points to keep in mind is that US-based E2EE outfits have far fewer features than Telegram. Furthermore, our research suggests that Telegram indeed has a plan. The company has learned from its initial attempt to create a crypto play. Now the “structure” of Telegram involves an “open” foundation with an alleged operation in Zug, Switzerland, which some describe as the crypto nerve center of central Europe. Plus, Telegram is busy trying to deploy a global version of VKontakte (the Russian Facebook) for Telegram users, developers, crypto players, and tire kickers.
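For readers who want the “not end-to-end by default” distinction spelled out, here is a toy sketch in Python; it uses no real Telegram or Discord APIs and only models the behavior described above, where the default chat path is server-side encrypted and a secret chat is an explicit opt-in that needs the other party online:

```python
# Toy model of the two chat paths described above.
from dataclasses import dataclass

@dataclass
class Chat:
    kind: str    # "cloud" (default) or "secret" (opt-in)
    e2ee: bool   # end-to-end encrypted?

def start_chat(secret: bool, peer_online: bool) -> Chat:
    if not secret:
        # Default path: messages are readable server-side, not E2EE.
        return Chat(kind="cloud", e2ee=False)
    if not peer_online:
        # The opt-in path needs the peer online to complete key exchange.
        raise RuntimeError("secret chat requires the other party to be online")
    return Chat(kind="secret", e2ee=True)

print(start_chat(secret=False, peer_online=False))  # Chat(kind='cloud', e2ee=False)
print(start_chat(secret=True, peer_online=True))    # Chat(kind='secret', e2ee=True)
```

The interesting design question is which branch a service takes by default, and that is where Discord’s announcement and Telegram’s road map diverge.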

Several observations:

  1. Discord’s innovations are essentially variants of something Telegram’s engineers implemented years ago
  2. The Discord operation is based in the US which has quite different rules, laws, and tax regulations than Dubai
  3. Telegram is allegedly becoming more cooperative with law enforcement because the company wants to pull off an initial public offering.

Will Discord follow the Telegram road map, undertaking the really big plays; specifically, integrated crypto, an IPO, and orders of magnitude more features and functional capabilities?

I don’t know the answer to this question, but E2EE seems to be a buzzword that is gaining traction now that the AI craziness is beginning to lose some of its hyperbolicity. However, it is important to keep in mind that Telegram is pushing forward far more aggressively than US social media companies. As Telegram approaches one billion users, it could make inroads into the US and tip over some digital apple carts. The answer to my question is, “Probably not. US companies often ignore details about non-US entities.” Perhaps Discord’s leadership should take a closer look at the Telegram operation, which spans Discord functionality, YouTube hooks, open source tactics, its own crypto, and its recent social media unit?

Stephen E Arnold, September 26, 2024

AI Automation Has a Benefit … for Some

September 26, 2024

Humanity’s progress runs parallel to advancing technology. As technology advances, aspects of human society and culture are rendered obsolete and are replaced with new things. Job automation is a huge part of this; past examples are the Industrial Revolution and the implementation of computers. AI algorithms are set to make another part of the labor force defunct, but the BBC claims that might be beneficial to workers: “Klarna: AI Lets Us Cut Thousands Of Jobs – But Pay More.”

Klarna is a fintech company that provides online financial services and is described as a “buy now, pay later” company. Klarna plans to use AI to automate the majority of its workforce. The company’s leaders already canned 1200 employees, and they plan to fire another 2000 as AI marketing and customer service are implemented. That leaves Klarna with a grand total of 1800 employees who will be paid more.

Klarna’s CEO Sebastian Siemiatkowski is putting a positive spin on cutting jobs by saying the remaining employees will receive larger salaries. While Siemiatkowski sees the benefits of AI, he does warn about AI’s downside and advises the government to do something. He said:

“ ‘I think politicians already today should consider whether there are other alternatives of how they could support people that may be effective,’ he told the Today programme, on BBC Radio 4.

He said it was “too simplistic” to simply say new jobs would be created in the future.

‘I mean, maybe you can become an influencer, but it’s hard to do so if you are 55-years-old,’ he said.”

The International Monetary Fund (IMF) predicts that AI will affect 40% of all jobs and is likely to worsen “overall inequality.” As Klarna reduces its staff, the company will enter what is called “natural attrition,” aka a hiring freeze. The remaining workforce will have bigger workloads. Siemiatkowski claims AI will eventually reduce those workloads.

Will that really happen? Maybe?

Will the remaining workers receive a pay raise or will that money go straight to the leaders’ pockets? Probably.

Whitney Grace, September 26, 2024

Google Rear Ends Microsoft on an EU Information Highway

September 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A couple of high-technology dinosaurs with big teeth and even bigger wallets are squabbling in a rather clever way. If the dispute escalates, some of the smaller vehicles on the EU’s Information Superhighway are going to be affected by a remarkable collision. The orange newspaper published “Google Files Brussels Complaint against Microsoft Cloud Business.” On the surface, the story explains that “Google accuses Microsoft of locking customers into its Azure services, preventing them from easily switching to alternatives.”


Two very large and easily provoked dinosaurs are engaged in a contest in a court of law. Which will prevail, or will both end up with broken arms? Thanks, MSFT Copilot. I think you are the prettier dinosaur.

To put some bite into the allegation, Google aka Googzilla has:

filed an antitrust complaint in Brussels against Microsoft, alleging its Big Tech rival engages in unfair cloud computing practices that has led to a reduction in choice and an increase in prices… Google said Microsoft is “exploiting” its customers’ reliance on products such as its Windows software by imposing “steep penalties” on using rival cloud providers.

From my vantage point this looks like a rear ender; that is, Google — itself under considerable scrutiny by assorted governmental entities — has smacked into Microsoft, a veteran of EU regulatory penalties. Google explained to the monopoly officer that Microsoft was using discriminatory practices to prevent Google, AWS, and Alibaba from closing cloud computing deals.

In a conversation with some of my research team, several observations surfaced from what I would describe as a jaded group. Let me share several of these:

  1. Locking up business is precisely the “game” for US high-technology dinosaurs with big teeth, and for some China-affiliated outfits too. I believe the jargon for this business tactic is “lock in.” IBM allegedly found the play helpful when mainframes were the next big thing. Just try and move some government agencies or large financial institutions from their Big Iron to Chromebooks and see how the suggestion is greeted.
  2. Google has called attention to the alleged illegal actions of Microsoft, bringing the Softies into the EU litigation gladiatorial arena.
  3. Information provided by Google may illustrate the alleged business practices so that, compared to Google’s approach, Googzilla looks like the ideal golfing partner.
  4. Any question that US outfits like Google and Microsoft are just mom-and-pop businesses is definitively resolved.

My personal opinion is that Google wants to make certain that Microsoft is dragged into what will be expensive, slow, and probably business trajectory altering legal processes. Perhaps Satya and Sundar will testify as their mercenaries explain that both companies are not monopolies, not hindering competition, and love whales, small start ups, ethical behavior, and the rule of law.

Stephen E Arnold, September 25, 2024
