Microsoft Security: A World First

September 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

After the somewhat critical comments of the chief information security officer for the US, Microsoft said it would do security better. The “Secure Future Initiative” is a 25-page document which contains some interesting comments. Let’s look at a handful.


Some bad actors just go where the pickings are the easiest. Thanks, MSFT Copilot. Good enough.

On page 2 I noted the record-beating effort Microsoft has completed:

Our engineering teams quickly dedicated the equivalent of 34,000 full-time engineers to address the highest priority security tasks—the largest cybersecurity engineering project in history.

Microsoft is a large software company. It has large security issues. Therefore, the company has undertaken the “largest cybersecurity engineering project in history.” That’s great for the Guinness Book of World Records. The question is, “Why?” The answer, it seems to me, is that Microsoft did “good enough” security. As the US government’s report stated, “Nope. Not good enough.” Hence, a big and expensive series of changes. Have the changes been tested, or have unexpected security issues been introduced into the sprawl of Microsoft software? Another question from this dinobaby: Can a big company doing “good enough” security implement fixes to remediate “the highest priority security tasks”? Companies have difficulty changing certain work practices. Can “good enough” methods do the job?

On page 3:

Security added as a core priority for all employees, measured against all performance reviews. Microsoft’s senior leadership team’s compensation is now tied to security performance

Compensation is linked to security as a “core priority.” I am not sure what making something a “core priority” means, particularly when the organization has implemented security systems and methods which have been found wanting. When the US government gives a bad report card, one forms an impression of a fairly deep hole which needs to be filled with functional, reliable bits. Adding a “core priority” does not correlate with secure software from cloud to desktop.

On page 5:

To enhance governance, we have established a new Cybersecurity Governance Council…

The creation of a council, the addition of security responsibilities to some executives, and the hiring of a few others mean to me:

  1. Meetings and delays
  2. Adding duties may translate to other issues
  3. How much will these remediating processes cost?

Microsoft may be too big to change its culture in a timely manner. The time required for a council to enhance governance means fixing security problems may take time. Even with additional time, coordinating “the equivalent of 34,000 full-time engineers” may be a project management task of more than modest proportions.

On page 7:

Secure by design

Quite a subhead. How can Microsoft’s sweep of legacy and new products be made secure by design when these products have been shown to be insecure?

On page 10:

Our strategy for delivering enduring compliance with the standard is to identify how we will Start Right, Stay Right, and Get Right for each standard, which are then driven programmatically through dashboard driven reviews.

The alliteration is notable. However, what is “right”? What happens when fixing up existing issues and adhering to a “standard” reveals that the “standard” has changed? The complexity of the management process and of getting something “right” is like an example from a Santa Fe Institute book on complexity. The reality of addressing known security issues and conforming to standards which may change is interesting to contemplate. Words are great, but remediating what’s wrong in a dynamic and very complicated series of dependent services is likely to be a challenge. Bad actors will quickly probe for new issues. Generally speaking, bad actors find faults and exploit them. Thus, Microsoft will find itself in a troublesome mode: permanent reactions to previously unknown and new security issues.

On page 11, the security manifesto launches into “pillars.” I think the idea is that good security is built upon strong foundations. But when remediating “as is” code as well as legacy code, how long will the design, engineering, and construction of the pillars take? Months, years, decades, or multiple decades? The US CISO report card may not apply to certain time scales; for instance, big government contracts. Pillars are ideas.

Let’s look at one:

The monitor and detect threats pillar focuses on ensuring that all assets within Microsoft production infrastructure and services are emitting security logs in a standardized format that are accessible from a centralized data system for both effective threat hunting/investigation and monitoring purposes. This pillar also emphasizes the development of robust detection capabilities and processes to rapidly identify and respond to any anomalous access, behavior, and configuration.

The reality of today’s world is that security issues can arise from insiders. Outside threats seem to be identified each week. However, different cyber security firms identify and analyze different security issues. No one cyber security company is delivering 100 percent foolproof threat identification. “Logs” are great; however, Microsoft used to charge for making a logging function available to a customer. Now more logs. The problem is that logs help identify a breach; that is, a previously unknown vulnerability is exploited, or an old vulnerability makes its way into a Microsoft system by a user action. How can a company which has a poor report card issued by the US government become the firm with a threat detection system which is the equivalent of products now available from established vendors? The recent CrowdStrike misstep illustrates that the Microsoft culture created the opportunity for the procedural mistake someone made at CrowdStrike. The words are nice, but I am not that confident in Microsoft’s ability to build this pillar. Microsoft may have to punt and buy several competitive systems and deploy them like mercenaries to protect the unmotivated Roman citizens in a century.
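The pillar quoted above boils down to a familiar pattern: every asset emits events in one schema, a central store collects them, and detection rules run over the pool. Here is a minimal, hypothetical sketch of that pattern in Python; none of the names, fields, or thresholds come from Microsoft’s document, and a real deployment would feed a SIEM pipeline rather than a list in memory.

```python
# Hypothetical sketch only: standardized security events flowing to a central
# store with one toy detection rule. Names and fields are invented for
# illustration; nothing here is taken from the Secure Future Initiative.
import json
import time
from collections import defaultdict

def make_event(asset_id: str, actor: str, action: str, source_ip: str) -> dict:
    """Build a security log event in a single standardized format."""
    return {
        "timestamp": time.time(),
        "asset_id": asset_id,
        "actor": actor,
        "action": action,
        "source_ip": source_ip,
    }

class CentralLogStore:
    """Stand-in for a centralized data system used for threat hunting."""
    def __init__(self):
        self.events = []

    def ingest(self, event: dict) -> None:
        self.events.append(event)
        print(json.dumps(event))  # in practice: forward to a SIEM pipeline

    def flag_anomalous_access(self, threshold: int = 3) -> list:
        """Toy detection rule: flag actors touching many distinct assets."""
        assets_by_actor = defaultdict(set)
        for e in self.events:
            assets_by_actor[e["actor"]].add(e["asset_id"])
        return [a for a, assets in assets_by_actor.items() if len(assets) >= threshold]

store = CentralLogStore()
for asset in ("mail-server", "build-agent", "billing-db", "hr-portal"):
    store.ingest(make_event(asset, actor="svc-account-7", action="read", source_ip="10.0.0.5"))
print("Anomalous actors:", store.flag_anomalous_access())
```

The sketch shows why the prose sounds simple and the engineering is not: the hard part is getting every legacy asset to emit the same schema and keeping the detection rules current, not writing the plumbing.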

I think reading the “Secure Future Initiative” is a useful exercise. Manifestos can add juice to a mission. However, can the troops deliver a victory over the bad actors who swarm to Microsoft systems and services because good enough is like a fried chicken leg to a colony of ants?

Stephen E Arnold, September 30, 2024

Salesforce: AI Dreams

September 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Big Tech companies are heavily investing in AI technology, including Salesforce. Salesforce CEO Marc Benioff delivered a keynote about his company’s future and the end of an era as reported by Constellation Research: “Salesforce Dreamforce 2024: Takeaways On Agentic AI, Platform, End Of Copilot Era.” Benioff described the copilot era as “hit or miss” and he wants to focus on agentic AI powered by Salesforce.

Constellation Research analyst Doug Henschen said that Benioff made a compelling case for Salesforce and Data Cloud being the platform that companies will use to build their AI agents. Salesforce already has metadata, data, app business logic knowledge, and more programmed in, while Data Cloud has data integrated from third-party data clouds and ingested from external apps. Combining these components into one platform without DIY makes for a very appealing product.

Benioff and his team revamped Salesforce to be less a series of clouds that run independently and more of a bunch of clouds that work together in a native system. It means Salesforce will scale Agentforce across Marketing, Commerce, Sales, Revenue and Service Clouds as well as Tableau.

The new AI Salesforce wants to delete DIY, says Benioff:

“‘ DIY means I’m just putting it all together on my own. But I don’t think you can DIY this. You want a single, professionally managed, secure, reliable, available platform. You want the ability to deploy this Agentforce capability across all of these people that are so important for your company. We all have struggled in the last two years with this vision of copilots and LLMs. Why are we doing that? We can move from chatbots to copilots to this new Agentforce world, and it’s going to know your business, plan, reason and take action on your behalf.

It’s about the Salesforce platform, and it’s about our core mantra at Salesforce, which is, you don’t want to DIY it. This is why we started this company.’”

Benioff has big plans for Salesforce, and based on this Dreamforce keynote, it will succeed. However, AI is still experimental. AI is smart, but a human is still easier to work with. Salesforce should consider teaming AI with real people for the ultimate solution.

Whitney Grace, September 30, 2024

Social Media: A Glimpse of What Ignorance Fosters

September 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The US Federal Trade Commission has published a free report. “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services” comprises about 80 pages for the actual report. The document covers:

  • A legal framework for social media and streaming services
  • Some basic information about the companies mentioned in the report
  • The data “practices” of the companies (I would have preferred the phrase “procedures and exploitation”)
  • Advertising practices (my suggestion is to name the section “revenue generation and maximization”)
  • Algorithms, Data Analytics, or AI
  • Children and teens

The document includes comments about competition (er, what?), some conclusions, and FTC staff recommendations.

From the get-go, the document uses an acronym: SMVSSs, which stands for Social Media and Video Streaming Services. The section headings summarize the scope of the document. The findings are ones which struck me as fairly obvious; specifically:

  • People have no idea how much data are collected, analyzed, and monetized
  • Revenue is generated by selling ads which hook into the user data
  • Lots of software (dumb and smart) is required to make the cost of operations as efficient as possible
  • Children’s system use and their data are part of the game plan.

The report presents assorted “should do” and “must do.” These too are obvious; for example, “Companies should implement policies that would ensure greater protection of children and teens.”

I am a dinobaby. Commercial enterprises are going to do what produces revenue and market reach. “Should” and “would” are nice verbs. Without rules and regulations, the companies just do what companies do. Consequences were needed more than two decades ago. Now “fixing up” social media is an idea which begs for reasonable solutions. Some countries just block US social media outfits; others infiltrate the organizations and use them and the data as an extension of a regime’s capabilities. A few countries think that revenue and growth are just dandy. Do you live in one of these nation states?

Net net: Worth reading. I want a T shirt that says SMVSSs.

Stephen E Arnold, September 27, 2024

Solana: Emulating Telegram after a Multi-Year Delay

September 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I spotted an interesting example of Telegram emulation. My experience is that most online-centric professionals have a general awareness of Telegram. Its more than 125 features and functions are lost in the haze of social media posts, podcasts, and “real” news generated by some humanoids and a growing number of gradient descent-based software.

I think the information in “What Is the Solana Seeker Web3 Mobile Device” is worth noting. Why? I will list several reasons at the end of this short write up about a new must have device for some crypto sensitive professionals.

The Solana Seeker is a gizmo that embodies Web3 goodness. Solana was set up to enable the Solana blockchain platform. The wizards behind the firm were Anatoly Yakovenko and Raj Gokal. The duo set up Solana Labs and then shaped what is becoming the go-to organizational Lego block for assorted crypto plays: the Solana Foundation. This non-profit organization has made its Proof of History technology into the fires heating the boilers of another coin or currency or New Age financial revolution. I am never sure what emerges from these plays. The idea is to make smart contracts work and enable decentralized finance. The goals include making money, creating new digital experiences to make money, and cashing in on those for whom click-based games are a slick way to make money. Did I mention money as a motivator?


A hypothetical conversation between two crypto currency and blockchain experts. What could go wrong? Thanks, MSFT Copilot. Good enough.

How can you interact with the Solana environment? The answer is to purchase an Android-based digital device. The Seeker allows anyone to have the Solana ecosystem in one’s pocket. From my dinobaby’s point of view, we have another device designed to obfuscate certain activities. I assume Solana will disagree with my assessment, but things crypto evoke activities at odds with some countries’ rules and regulations.

The cited article points out that the device is a YAAP (yet another Android phone). The big feature seems to be the Seed Vault wallet. In addition to the usual razzle dazzle about security, the Seeker lets a crypto holder participate in transactions with a couple of taps. The Seeker interface is designed to make crypto activities smoother and easier. Solana has, like other mobile vendors, created its own online store. When you buy a Seeker, you get a special token. The description I am referencing descends into crypto babble very similar to the lingo used by the Telegram One Network Foundation. The problem is that Telegram has about a billion users and is in the news because French authorities took action to corral the cowboy Russian-born Pavel Durov for some of his behaviors France found objectionable.

Can anyone get into the generic Android device business, do some fiddling, and deploy a specialized device? The answer is, “Yep.” If you are curious, just navigate to Alibaba.com and search for generic cell phones. You have to buy 3,000 or more, but the price is right: About US$70 per piece. Tip: Life is easier if you have an intermediary based in Bangkok or Singapore.

Let’s address the reasons this announcement is important to a dinobaby like me:

  1. Solana, like Meta (Facebook), is following in Telegram’s footsteps. Granted, it has taken these two example companies years to catch on to the Telegram “play,” but movement is underway. If you are a cyber investigator, this emulation of Telegram will have significant implications in 2025 and beyond.
  2. The more off-brand devices there are, the easier it becomes for intelligence professionals to modify some of these gizmos. The reports of pagers, solar panels, and answering machines behaving in an unexpected manner go from surprise to someone asking, “Do you know what’s in your digital wallet?”
  3. The notion of a baked-in, super-secret enclave for the digital cash provides an ideal way to add secure messaging or software to enable a network within a network in the manner of some military communication services. The patents are publicly available, and they put replication within the realm of possibility.

Net net: Whether the Seeker flies or flops is irrelevant. Monkey see, monkey do. A Telegram technology road map makes interesting reading, and it presages the future of some crypto activities. If you want to know more about our Telegram Road Map, write benkent2020 at yahoo.com.

Stephen E Arnold, September 27, 2024

Stupidity: Under-Valued

September 27, 2024

We’re taught from a young age that being stupid is bad. The stupid kids don’t move on to higher grades and they’re ridiculed on the playground. We’re also fearful of showing our stupidity, which often goes hand in hand with ignorance. These cause embarrassment and fear, but Math For Love has a different perspective: “The Centrality Of Stupidity In Mathematics.”

Math For Love is a Web site dedicated to revolutionizing how math is taught. They have games, curriculum, and more demonstrating how beautiful and fun math is. Math is one of those subjects that makes a lot of people feel dumb, especially the higher levels. The Math For Love team referenced an essay by Martin A. Schwartz called, “The Importance Of Stupidity In Scientific Research.”

Schwartz is a microbiologist and professor at the University of Virginia. In his essay, he expounds on how modern academia makes people feel stupid.

The stupid feeling is one of inferiority. It’s a problem. We’re made to believe that doctors, engineers, scientists, teachers, and other smart people never experienced any difficulty. Schwartz points out that students (and humanity) need to learn that research is extremely hard. No one starts out at the top. He also says that they need to be taught how to be productively stupid, i.e., if you don’t feel stupid then you’re not really trying.

Humans are meant to feel stupid, otherwise they wouldn’t investigate, explore, or experiment. There’s an entire era in western history about overcoming stupidity: the Enlightenment. Math For Love explains that stupidity is relative to age, and once a child grows, they overcome certain levels of stupidity, aka ignorance. Kids gain comprehension of an idea, then can apply it to life. It’s the literal meaning of the euphemism: once a mind has been stretched, it can’t go back to its original size.

“I’ve come to believe that one of the best ways to address the centrality of stupidity is to take on two opposing efforts at once: you need to assure students that they are not stupid, while at the same time communicating that feeling like they are stupid is totally natural. The message isn’t that they shouldn’t be feeling stupid – that denies their honest feeling to learning the subject. The message is that of course they’re feeling stupid… that’s how everyone has to feel in order to learn math!”

Add some warm feelings to the equation and subtract self-consciousness, multiply by practice, and divide by intelligence level. That will round out stupidity and make it productive.

Whitney Grace, September 27, 2024

AI Maybe Should Not Be Accurate, Correct, or Reliable?

September 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Okay, AI does not hallucinate. “AI” — whatever that means — does output incorrect, false, made-up, and possibly problematic answers. The buzzword “hallucinate” was cooked up by experts in artificial intelligence who do whatever they can to avoid talking about probabilities, human biases migrated into algorithms, and fiddling with the knobs and dials in the computational wonderland of an AI system like Google’s, OpenAI’s, et al. Even the book Why Machines Learn: The Elegant Math Behind Modern AI ends up tangled in math and jargon which may befuddle readers who stopped taking math after high school algebra or who have never thought about orthogonal matrices.

The Next Web’s “AI Doesn’t Hallucinate — Why Attributing Human Traits to Tech Is Users’ Biggest Pitfall” is an interesting write up. On one hand, it probably captures the attitude of those who just love that AI goodness by blaming humans for anthropomorphizing smart software. On the other hand, the AI systems with which I have interacted output content that is wrong or wonky. I admit that I ask the systems to which I have access for information on topics about which I have some knowledge. Keep in mind that I am an 80-year-old dinobaby, and I view “knowledge” as something that comes from bright people working on projects, reading relevant books and articles, and giving conference presentations or holding meetings about subjects far from the best exercise leggings or how to get a Web page to the top of a Google results list.

Let’s look at two of the points in the article which caught my attention.

First, consider this passage which is a quote from an AI expert:

“Luckily, it’s not a very widespread problem. It only happens between 2% to maybe 10% of the time at the high end. But still, it can be very dangerous in a business environment. Imagine asking an AI system to diagnose a patient or land an aeroplane,” says Amr Awadallah, an AI expert who’s set to give a talk at VDS2024 on How Gen-AI is Transforming Business & Avoiding the Pitfalls.

Where does the 2 percent to 10 percent number come from? What methods were used to determine that content was off the mark? What was the sample size? Has bad output been tracked longitudinally for the tested systems? Ah, so many questions and zero answers. My take is that the jargon “hallucination” is coming back to bite AI experts on the ankle.

Second, what’s the fix? Not surprisingly, the way out of the problem is to rename “hallucination” to “confabulation”. That’s helpful. Here’s the passage I circled:

“It’s really attributing more to the AI than it is. It’s not thinking in the same way we’re thinking. All it’s doing is trying to predict what the next word should be given all the previous words that have been said,” Awadallah explains. If he had to give this occurrence a name, he would call it a ‘confabulation.’ Confabulations are essentially the addition of words or sentences that fill in the blanks in a way that makes the information look credible, even if it’s incorrect. “[AI models are] highly incentivized to answer any question. It doesn’t want to tell you, ‘I don’t know’,” says Awadallah.

Third, let’s not forget that the problem rests with the users, the personifiers, the people who own French bulldogs and talk to them as though they were the favorite in a large family. Here’s the passage:

The danger here is that while some confabulations are easy to detect because they border on the absurd, most of the time an AI will present information that is very believable. And the more we begin to rely on AI to help us speed up productivity, the more we may take their seemingly believable responses at face value. This means companies need to be vigilant about including human oversight for every task an AI completes, dedicating more and not less time and resources.

The ending of the article is a remarkable statement; to wit:

As we edge closer and closer to eliminating AI confabulations, an interesting question to consider is, do we actually want AI to be factual and correct 100% of the time? Could limiting their responses also limit our ability to use them for creative tasks?

Let me answer the question: Yes, outputs should be presented and possibly scored; for example, 90 percent probable that the information is verifiable. Maybe emojis will work? Wow.
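To make that answer concrete, here is a tiny, hypothetical sketch of what scoring an output might look like; the heuristic and every name in it are invented for illustration, and nothing in the cited article specifies how such a score would actually be computed.

```python
# Toy sketch of the "score the output" idea: attach a verifiability estimate to
# each answer instead of presenting it as flat fact. The scoring heuristic is a
# placeholder invented for this example.
from dataclasses import dataclass

@dataclass
class ScoredAnswer:
    text: str
    verifiability: float  # 0.0 to 1.0

    def render(self) -> str:
        return f"{self.text} [estimated {self.verifiability:.0%} probable the information is verifiable]"

def score_answer(text: str, supporting_sources: int) -> ScoredAnswer:
    """Placeholder heuristic: more independent supporting sources, higher score."""
    score = min(0.5 + 0.1 * supporting_sources, 0.95)
    return ScoredAnswer(text=text, verifiability=score)

print(score_answer("The Battle of Hastings took place in 1066.", supporting_sources=4).render())
```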

Stephen E Arnold, September 26, 2024

Discord: Following the Telegram Road Map?

September 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A couple of weeks ago, I presented some Telegram (the company in Dubai’s tax-free zone) information. My team and I created a timeline, a type of information display popular among investigators and intelligence analysts. The idea is that if one can look at events across a span of hours, days, months, or years in the case of Telegram, one can get some insight into what I call the “innovation cadence” of the entity, staff growth or loss, type of business activity in which the outfit engages, etc.

image

Some high-technology outfits follow road maps in circulation for a decade or more. Thanks, MSFT Copilot. Good enough.

I read “Discord Launches End-to-End Encrypted Voice and Video Chats.” This social media outfit is pushing forward with E2EE. Because the company is located in the US, the firm operates under the umbrella of US laws, rules, and regulations. Consequently, US government officials can serve legal documents which request certain information from the company. I want to skip over this announcement and the E2EE system and methods which Discord is using or will use as it expands its services.

I want to raise the question, “Is Discord following the Telegram road map?” Telegram, as you probably know, does not provide end-to-end encryption by default. In order to send a “secret” encrypted message, one has to click through several screens and send a message to a person who must be online to make the Telegram system work. However, Telegram provides less sophisticated methods of keeping messages private. These tactics include a split between public Groups and private Groups. Clever Telegram users can use Telegram as a back end from which to deliver ransomware or engage in commercial transactions. One of the important points to keep in mind is that US-based E2EE outfits have far fewer features than Telegram. Furthermore, our research suggests that Telegram indeed has a plan. The company has learned from its initial attempt to create a crypto play. Now the “structure” of Telegram involves an “open” foundation with an alleged operation in Zug, Switzerland, which some describe as the crypto nerve center of central Europe. Plus, Telegram is busy trying to deploy a global version of VKontakte (the Russian Facebook) for Telegram users, developers, crypto players, and tire kickers.

Several observations:

  1. Discord’s innovations are essentially variants of something Telegram’s engineers implemented years ago
  2. The Discord operation is based in the US which has quite different rules, laws, and tax regulations than Dubai
  3. Telegram is allegedly becoming more cooperative with law enforcement because the company wants to pull off an initial public offering.

Will Discord follow the Telegram road map, undertaking the really big plays; specifically, integrated crypto, an IPO, and orders of magnitude more features and functional capabilities?

I don’t know the answer to this question, but E2EE seems to be a buzzword that is gaining traction now that the AI craziness is beginning to lose some of its hyperbolicity. However, it is important to keep in mind that Telegram is pushing forward far more aggressively than US social media companies. As Telegram approaches one billion users, it could make inroads into the US and tip over some digital apple carts. The answer to my question is, “Probably not. US companies often ignore details about non-US entities.” Perhaps Discord’s leadership should take a closer look at the Telegram operation which spans Discord functionality, YouTube hooks, open source tactics, its own crypto, and its recent social media unit?

Stephen E Arnold, September 26, 2024

AI Automation Has a Benefit … for Some

September 26, 2024

Humanity’s progress runs parallel to advancing technology. As technology advances, aspects of human society and culture are rendered obsolete and replaced with new things. Job automation is a huge part of this; past examples are the Industrial Revolution and the implementation of computers. AI algorithms are set to make another part of the labor force defunct, but the BBC claims that might be beneficial to workers: “Klarna: AI Lets Us Cut Thousands Of Jobs-But Pay More.”

Klarna is a fintech company that provides online financial services and is described as a “buy now, pay later” company. Klarna plans to use AI to automate the majority of its workforce. The company’s leaders already canned 1200 employees and they plan to fire another 2000 as AI marketing and customer service are implemented. That leaves Klarna with a grand total of 1800 employees who will be paid more.

Klarna’s CEO Sebastian Siemiatkowski is putting a positive spin on cutting jobs by saying the remaining employees will receive larger salaries. While Siemiatkowski sees the benefits of AI, he does warn about AI’s downside and advises the government to do something. He said:

“ ‘I think politicians already today should consider whether there are other alternatives of how they could support people that may be effective,’ he told the Today programme, on BBC Radio 4.

He said it was “too simplistic” to simply say new jobs would be created in the future.

‘I mean, maybe you can become an influencer, but it’s hard to do so if you are 55-years-old,’ he said.”

The International Monetary Fund (IMF) predicts that 40% of all jobs will be affected by AI, which could worsen overall inequality. As Klarna reduces its staff, the company will enter what is called “natural attrition,” aka a hiring freeze. The remaining workforce will have bigger workloads. Siemiatkowski claims AI will eventually reduce those workloads.

Will that really happen? Maybe?

Will the remaining workers receive a pay raise or will that money go straight to the leaders’ pockets? Probably.

Whitney Grace, September 26, 2024

Google Rear Ends Microsoft on an EU Information Highway

September 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A couple of high-technology dinosaurs with big teeth and even bigger wallets are squabbling in a rather clever way. If the dispute escalates, some of the smaller vehicles on the EU’s Information Superhighway are going to be affected by a remarkable collision. The orange newspaper published “Google Files Brussels Complaint against Microsoft Cloud Business.” On the surface, the story explains that “Google accuses Microsoft of locking customers into its Azure services, preventing them from easily switching to alternatives.”


Two very large and easily provoked dinosaurs are engaged in a contest in a court of law. Which will prevail, or will both end up with broken arms? Thanks, MSFT Copilot. I think you are the prettier dinosaur.

To put some bite into the allegation, Google aka Googzilla has:

filed an antitrust complaint in Brussels against Microsoft, alleging its Big Tech rival engages in unfair cloud computing practices that has led to a reduction in choice and an increase in prices… Google said Microsoft is “exploiting” its customers’ reliance on products such as its Windows software by imposing “steep penalties” on using rival cloud providers.

From my vantage point this looks like a rear ender; that is, Google — itself under considerable scrutiny by assorted governmental entities — has smacked into Microsoft, a veteran of EU regulatory penalties. Google explained to the monopoly officer that Microsoft was using discriminatory practices to prevent Google, AWS, and Alibaba from closing cloud computing deals.

In a conversation with some of my research team, several observations surfaced from what I would describe as a jaded group. Let me share several of these:

  1. Locking up business is precisely the “game” for US high-technology dinosaurs with big teeth and some China-affiliated outfits too. I believe the jargon for this business tactic is “lock in.” IBM allegedly found the play helpful when mainframes were the next big thing. Just try and move some government agencies or large financial institutions from their Big Iron to Chromebooks and see how the suggestion is greeted.
  2. Google has called attention to the alleged illegal actions of Microsoft, bringing the Softies into the EU litigation gladiatorial arena.
  3. Information provided by Google may illustrate the alleged business practices so that, when compared to Google’s approach, Googzilla looks like the ideal golfing partner.
  4. Any question that US outfits like Google and Microsoft are just mom-and-pop businesses is definitively resolved.

My personal opinion is that Google wants to make certain that Microsoft is dragged into what will be expensive, slow, and probably business trajectory altering legal processes. Perhaps Satya and Sundar will testify as their mercenaries explain that both companies are not monopolies, not hindering competition, and love whales, small start ups, ethical behavior, and the rule of law.

Stephen E Arnold, September 25, 2024

The Zuck: Limited by Regulation. Is This a Surprise?

September 25, 2024

Privacy laws in the EU are having an effect on Meta’s actions in that region. That’s great. But what about the rest of the world? When pressed by Australian senators, the company’s global privacy director Melinda Claybaugh fessed up. “Facebook Admits to Scraping Every Australian Adult User’s Public Photos and Posts to Train AI, with No Opt-Out Option,” reports ABC News. Journalist Jake Evans writes:

“Labor senator Tony Sheldon asked whether Meta had used Australian posts from as far back as 2007 to feed its AI products, to which Ms Claybaugh responded ‘we have not done that’. But that was quickly challenged by Greens senator David Shoebridge. Shoebridge: ‘The truth of the matter is that unless you have consciously set those posts to private since 2007, Meta has just decided that you will scrape all of the photos and all of the texts from every public post on Instagram or Facebook since 2007, unless there was a conscious decision to set them on private. That’s the reality, isn’t it?’ Claybaugh: ‘Correct.’ Ms Claybaugh added that accounts of people under 18 were not scraped, but when asked by Senator Sheldon whether public photos of his own children on his account would be scraped, Ms Claybaugh acknowledged they would. The Facebook representative could not answer whether the company scraped data from previous years of users who were now adults, but were under 18 when they created their accounts.”

Why do users in Australia not receive the same opt-out courtesy those in the EU enjoy? Simple, responds Ms. Claybaugh—their government has not required it. Not yet, anyway. But Privacy Act reforms are in the works there, a response to a 2020 review that found laws to be outdated. The updated legislation is expected to be announced in August—four years after the review was completed. Ah, the glacial pace of bureaucracy. Better late than never, one supposes.

Cynthia Murrell, September 25, 2024
