Eric Schmidt, Truth Teller at Stanford University, Bastion of Ethical Behavior

August 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I spotted some of the quotes in assorted online posts about Eric Schmidt’s talk / interview at Stanford University. I wanted to share a transcript of the remarks. You can find the ASCII transcript on GitHub at this link. For those interested in how Silicon Valley concepts influence one’s view of appropriate behavior, this talk is a gem. Is it at the level of the Confessions of St. Augustine? Well, the content is darned close in my opinion. Students of Google’s decision making past and present may find some guideposts. Aspiring “leadership” type people may well find tips and tricks.

Stephen E Arnold, August 26, 2024

Meta Leadership: Thank you for That Question

August 26, 2024

Who needs the Dark Web when one has Facebook? We learn from The Hill, “Lawmakers Press Meta Over Illicit Drug Advertising Concerns.” Writer Sarah Fortinsky pulls highlights from the open letter a group of House representatives sent directly to Mark Zuckerberg. The rebuke follows a March report from The Wall Street Journal that Meta was under investigation for “facilitating the sale of illicit drugs.” Since that report, the lawmakers lament, Meta has continued to run such ads. We learn:

“The Tech Transparency Project recently reported that it found more than 450 advertisements on those platforms that sell pharmaceuticals and other drugs in the last several months. ‘Meta appears to have continued to shirk its social responsibility and defy its own community guidelines. Protecting users online, especially children and teenagers, is one of our top priorities,’ the lawmakers wrote in their letter, which was signed by 19 lawmakers. ‘We are continuously concerned that Meta is not up to the task and this dereliction of duty needs to be addressed,’ they continued. Meta uses artificial intelligence to moderate content, but the Journal reported the company’s tools have not managed to detect the drug advertisements that bypass the system.”

The bipartisan representatives did not shy from accusing Meta of dragging its heels because it profits off these illicit ad campaigns:

“The lawmakers said it was ‘particularly egregious’ that the advertisements were ‘approved and monetized by Meta.’ … The lawmakers noted Meta repeatedly pushes back against their efforts to establish greater data privacy protections for users and makes the argument ‘that we would drastically disrupt this personalization you are providing,’ the lawmakers wrote. ‘If this personalization you are providing is pushing advertisements of illicit drugs to vulnerable Americans, then it is difficult for us to believe that you are not complicit in the trafficking of illicit drugs,’ they added.”

The letter includes a list of questions for Meta. There is a request for data on how many of these ads the company has discovered itself and how many it missed that were discovered by third parties. It also asks about the ad review process, how much money Meta has made off these ads, what measures are in place to guard against them, and how minors have interacted with them. The legislators also ask how Meta uses personal data to target these ads, a secret the company will surely resist disclosing. The letter gives Zuckerberg until September 6 to respond.

Cynthia Murrell, August 26, 2024

AI Snake Oil Hisses at AI

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Enthusiasm for certain types of novel software or gadgets rises and falls. The Microsoft marketing play with OpenAI marked the beginning of the smart software hype derby. Google got the message and flipped into Red Alert mode. Now about 20 months after Microsoft’s announcement about its AI tie up with Sam AI-Man, we have Google’s new combo: AI in a mobile phone. Bam! Job done. Slam dunk.


Thanks, MSFT Copilot. On top of the IPv6 issue? Oh, too bad.

I wonder if the Googlers were thinking along the same logical lines as the authors of “AI Companies Are Pivoting from Creating Gods to Building Products. Good.”

The snake oil? Dripping. Here’s a passage from the article I noted:

AI companies are collectively planning to spend a trillion dollars on hardware and data centers, but there’s been relatively little to show for it so far.

A trillion? That’s a decent number. Sam AI-Man wants more, but the scale is helpful, particularly when most numbers are mere billions in the zoom zoom world of smart software.

The most important item in the write up, in my opinion, is the list of five “challenges.” The article focuses on consumer AI. A couple of these apply to the enterprise sector as well. Let’s look at the five “challenges.” Keep in mind, I am paraphrasing, as dinobabies often do:

  1. Cost. In terms of consumers, one must consider making Hamster Kombat smart. (This is a Telegram dApp.) My team informed me that this little gem has 35 million users, and it is still growing. Imagine the computational cost to infuse each and every Hamster Kombat “game” player with AI goodness. But it’s a game and a distributed one at that, one might say. Someone has to pay for these cycles. And Hamster Kombat is not on most consumers’ radar. Telegram has about 950 million users, so the 35 million players come from that pool. What are the costs of AI infused games outside of a walled garden? And the hardware? And the optimization engineering? And the fooling around with ad deals? Costs are not a mere hurdle. Costs might be a Grand Canyon-scale leap into a financial mud bank.
  2. Reliability. Immature systems and methods, training content issues (real and synthetic), and the fancy math which uses a lot of probability procedures guarantees some interesting outputs.
  3. Privacy. The consumer or user facing services are immature. Developers want to get something to mostly work in a good enough manner. Then security may be discussed. But on to the next feature. As a result, I am not sure anyone has a decent grasp of the security issues which smart software might pose. Look at Microsoft. It’s been around almost half a century, and I learn about new security problems every day. Is smart software different?
  4. Safety and security. This is a concomitant to privacy. Good luck knowing what the systems do or do not do.
  5. User interface. I am a dinobaby. The interfaces are pale, low contrast, and change depending on what a user clicks. I like stability. Smart software simply does not comprehend that word.

Good points. My view is that the obstacle to surmount is money. I am not sure the big outfits anticipated the costs of their sally into the hallucinating world of AI. And what are those costs, pray tell? Here are selected items the financial managers at the Big Dogs are pondering along with the wording of their updated LinkedIn profiles:

  • Litigation. Remarks by some icons of the high technology sector have done little to assuage the feelings of those whose content was used without permission or compensation. Some, some people. A few Big Dogs are paying cash to scrape.
  • Power. Yep, electricity, as EV owners know, is not really free.
  • Water. Yep, modern machines produce heat if what I learned in physics was actual factual.
  • People (until they can be replaced by a machine that does not require health care or engage in signing petitions).
  • Data and indexing. Yep, still around and expensive.
  • License fees. They are comin’ round the mountain of legal filings.
  • Meals, travel and lodging. Leadership will be testifying, probably a lot.
  • PR advisors and crisis consultants. See the first bullet, Litigation.

However, slowly but surely some commercial sectors are using smart software. There is an AI law firm. There are dermatologists letting AI determine what to cut, freeze, or ignore. And there are college professors using AI to help them do “original” work and create peer-review fodder.

There was a snake in the Garden of Eden, right?

Stephen E Arnold, August 23, 2024

Google Leadership Versus Valued Googlers

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The summer in rural Kentucky lingers on. About 2,300 miles away from the Sundar & Prabhakar Comedy Show’s nerve center, the Alphabet Google YouTube DeepMind entity is also experiencing “cyclonic heating from chaotic employee motion.” What’s this mean? Unsteady waters? Heat stroke? Confusion? Hallucinations? My goodness.

The Google leadership faces another round of employee pushback. I read “Workers at Google DeepMind Push Company to Drop Military Contracts.”

How could the Google smart software fail to predict this pattern? My view is that smart software has some limitations when it comes to managing AI wizards. Furthermore, Google senior managers have not been able to extract full knowledge value from the tools at their disposal to deal with complexity. Time Magazine reports:

Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.

Why are AI Googlers grousing about military work? My personal view is that the recent hagiography of Palantir’s Alex Karp and the tie up between Microsoft and Palantir for Impact Level 5 services means that the US government is gearing up to spend some big bucks for warfighting technology. Google wants — really needs — this revenue. Penalties for its frisky behavior, which Judge Mehta describes as “monopolistic,” could put a hitch in the git along of Google ad revenue. Therefore, Google’s smart software can meet the hunger militaries have for intelligent software to perform a wide variety of functions. As the Russian special operation makes clear, “meat based” warfare is somewhat inefficient. Ukrainian garage-built drones with some AI bolted on perform better than a wave of 18 year olds with rifles and a handful of bullets. The example which sticks in my mind is a Ukrainian drone spotting a Russian soldier in a field, partially obscured by bushes. The individual is attending to nature’s call. The drone spots the “shape” and explodes near the Russian infantryman.


A former consultant faces an interpersonal Waterloo. How did that work out for Napoleon? Thanks, MSFT Copilot. Are you guys working on the IPv6 issue? Busy weekend ahead?

Those who study warfare probably have their own ah-ha moment.

The Time Magazine write up adds:

Those principles state the company [Google/DeepMind] will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.”) The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”

I love it when wizards “believe” something.

Will the Sundar & Prabhakar brain trust do believing or banking revenue from government agencies eager to gain access to advanced artificial intelligence services and systems? My view is that the “believers” underestimate the uncertainty arising from potential sanctions, fines, or corporate deconstruction the decision of Judge Mehta presents.

The article adds this bit of color about the Sundar & Prabhakar response time to Googlers’ concern about warfighting applications:

The [objecting employees’] letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

“No meaningful response” suggests that the Alphabet Google YouTube DeepMind rhetoric is not satisfactory.

The write up concludes with this paragraph:

At a DeepMind town hall event in June, executives were asked to respond to the letter, according to three people with knowledge of the matter. DeepMind’s chief operating officer Lila Ibrahim answered the question. She told employees that DeepMind would not design or deploy any AI applications for weaponry or mass surveillance, and that Google Cloud customers were legally bound by the company’s terms of service and acceptable use policy, according to a set of notes taken during the meeting that were reviewed by TIME. Ibrahim added that she was proud of Google’s track record of advancing safe and responsible AI, and that it was the reason she chose to join, and stay at, the company.

With Microsoft and Palantir, among others, poised to capture some end-of-fiscal-year money from certain US government budgets, the comedy act’s headquarters’ planners want a piece of the action. How will the Sundar & Prabhakar Comedy Act handle the situation? Why procrastinate? Perhaps the comedy act hopes the issue will just go away. The complaining employees have short attention spans, rely on TikTok-type services for information, and can be terminated like other Googlers who grouse, picket, boycott the Foosball table, or quiet quit while working on a personal start up.

The approach worked reasonably well before Judge Mehta labeled Google a monopoly operation. It worked when ad dollars flowed like latte at Philz Coffee. But today is different, and the unsettled personnel are not a joke and add to the uncertainty some have about the Google we know and love.

Stephen E Arnold, August 23, 2024

Which Is It, City of Columbus: Corrupted or Not Corrupted Data

August 23, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I learned that Columbus, Ohio, suffered one of those cyber security missteps. But the good news is that I learned from the ever reliable Associated Press, “Mayor of Columbus, Ohio, Says Ransomware Attackers Stole Corrupted, Unusable Data.” But then I read the StateScoop story “Columbus, Ohio, Ransomware Data Might Not Be Corrupted After All.”


The answer is, “I don’t know.” Thanks, MSFT Copilot. Good enough.

The story is a groundhog day tale. A bad actor compromises a system. The bad actor delivers ransomware. The senior officers know little about ransomware and even less about the cyber security systems marketed as a proactive, intelligent defense against bad stuff like ransomware. My view, as you know, is that it is easier to create sales decks and marketing collateral than it is to deliver cyber security software that works. Keep in mind that I am a dinobaby. I like products that under promise and over deliver. I like software that works, not sort of works or mostly works. Works. That’s it.

What’s interesting about Columbus other than its zoo, its annual flower festival, and the OCLC organization is that no one can agree on this issue. I believe this is a variation on the Bud Abbott and Lou Costello routine “Who’s on First.”

StateScoop’s story reported:

An anonymous cybersecurity expert told local news station WBNS Tuesday that the personal information of hundreds of thousands of Columbus residents is available on the dark web. The claim comes one day after Columbus Mayor Andrew Ginther announced to the public that the stolen data had been “corrupted” and most likely “unusable.” That assessment was based on recent findings of the city’s forensic investigation into the incident.

The article noted:

Last week, the city shared a fact sheet about the incident, which explains: “While the city continues to evaluate the data impacted, as of Friday August 9, 2024, our data mining efforts have not revealed that any of the dark web-posted data includes personally identifiable information.”

What are the lessons I have learned from these two stories about a security violation and ransomware extortion?

  1. Lousy cyber security is a result of indifferent (maybe lousy) management. How do I know? The City of Columbus cannot generate a consistent story.
  2. The compromised data were described in two different and opposite ways. The confusion underscores that the individuals involved are struggling with basic data processes. Who’s on first? I don’t know. No, he’s on third.
  3. The generalization that no one wants the data misses an important point. Data, once available, is of considerable interest to state actors who might be interested in the employees associated with the university, Chemical Abstracts, or some other information-centric entity in Columbus, Ohio.

Net net: The incident is one more grim reminder of the vulnerabilities which “managers” choose to ignore or leave to people who may lack certain expertise. The fix may begin in the hiring process.

Stephen E Arnold, August 23, 2024

Phishers: Targeting Government Contract Shoemakers Who Do Not Have Shoes But Talk about Them

August 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The proverb "The shoemaker’s children go barefoot" has inspired some bad actors who phish for online credentials. The obvious targets, some might suggest, are executives at major US government agencies. Those individuals are indeed targets, but a number of bad actors have found ways to get a GS-9 to click on a link designed to steal credentials. An even more promising barrel containing lots of fish may be the vendors who sell professional services, including cyber security, to the US government agencies.


Of course, our systems are secure. Thanks, MSFT Copilot. How is Word doing today? Still crashing?

“This Sophisticated New Phishing Campaign Is Going after US Government Contractors” explains:

Researchers from Perception Point revealed the “Uncle Scam” campaign bypasses security checks to deliver sophisticated phishing emails designed by LLMs to be extremely convincing. The attackers use advanced tools, including AI-powered phishing kits and the Microsoft Dynamics 365 platform, to execute convincing multi-step attacks.

The write up then reveals one of the key — maybe the principal key to success:

One of the key elements that makes this phishing campaign particularly effective is the abuse of Microsoft’s Dynamics 365 Marketing platform. The attackers leverage the domain "dyn365mktg.com," associated with Dynamics 365, to send out their malicious emails. Because this domain is pre-authenticated by Microsoft and complies with DKIM and SPF standards, phishing emails are more likely to bypass spam filters and reach the inboxes of unsuspecting recipients.

If I understand this statement, the recipient sees email with a pattern set up to suck credentials. Why would a government contractor click on such an email? The domain is “pre-authenticated by Microsoft.” If it looks like a duck and walks like a duck, the email must be a duck. Yes, it is a digital duck which is designed to take advantage of yet another “security” and “trust” facet of the Microsoft ecosystem.
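A minimal sketch of why the pre-authenticated duck waddles past filters: SPF and DKIM vouch only for the domain that actually sent the message, so mail relayed through a platform domain like dyn365mktg.com passes both checks even when the display name impersonates a government office. The agency name and allow-list below are my own illustrative inventions, not details from the Perception Point research.

```python
# SPF/DKIM "pass" authenticates the sending domain, nothing more. A
# display-name check like this one catches mail whose From header claims
# a known organization while the address domain belongs to someone else.
# ORG_DOMAINS is a hypothetical allow-list for illustration only.
from email.utils import parseaddr

ORG_DOMAINS = {
    "general services administration": {"gsa.gov"},
}

def display_name_spoof(from_header: str) -> bool:
    """True if the display name claims a known org but the address
    domain is not one of that org's registered domains."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    for org, domains in ORG_DOMAINS.items():
        if org in name.lower() and domain not in domains:
            return True
    return False

# This message would pass SPF and DKIM (the platform domain really did
# send it), yet the display name is borrowed plumage:
print(display_name_spoof(
    '"General Services Administration" <bids@dyn365mktg.com>'))  # True
```

The point of the sketch is that authentication and impersonation are different questions; a filter that stops at "DKIM passed" answers only the first.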

I found this series of statements interesting. Once again, the same old truisms are trotted out to help a victim avoid a similar problem in the future. I quote:

To safeguard your organization from falling victim to sophisticated phishing attacks like "Uncle Scam," Perception Point recommends taking the following precautions:

  • Double-check the Sender’s Email: Always scrutinize the sender’s email address for any signs of impersonation.
  • Hover Before You Click: Before clicking any link, hover over it to reveal the actual URL and ensure it is legitimate. 
  • Look for Errors: Pay attention to minor grammatical mistakes, unusual phrasing, or inconsistencies in the email content.
  • Leverage Advanced Detection Tools: Implement AI-powered multi-layered security solutions to detect and neutralize sophisticated phishing attempts.
  • Educate Your Team: Regularly train employees on how to identify phishing emails and the importance of verifying unsolicited communications.
  • Trust Your Instincts: If an email or offer seems too good to be true, it probably is. Always verify the authenticity of such communications through trusted channels.

How well do these tips work in today’s government contractor workspace? Answer: Not too well.

The issue is the underlying software. The fix is going to be difficult to implement. Microsoft is working to make its systems more secure. The government contractors can make shoes in the form of engineering change orders, scope changes, and responses to RFQs which hit every requirement in the RFP. But many of those firms have assumed that the cyber security systems will do their job.

Ignorance is bliss. Maybe not for the compromised contractor, but the bad actors are enjoying the Uncle Scam play and may for years to come.

Stephen E Arnold, August 22, 2024

AI Balloon: Losing Air and Boring People

August 22, 2024

Though tech bros who went all-in on AI still promise huge breakthroughs just over the horizon, Windows Central’s Kevin Okemwa warns: “The Generative AI Bubble Might Burst, Sending the Tech to an Early Deathbed Before Its Prime: ‘Don’t Believe the Hype’.” Sadly, it is probably too late to save certain career paths, like coding, from an AI takeover. But perhaps a slowdown would conserve some valuable resources. Wouldn’t that be nice? The write-up observes:

“While AI has opened up the world to endless opportunities and untapped potential, its hype might be short-lived, with challenges abounding. Aside from its high water and power demands, recent studies show that AI might be a fad and further claim that 30% of its projects will be abandoned after proof of concept. Similar sentiments are echoed in a recent Blood In The Machine newsletter, which points out critical issues that might potentially lead to ‘the beginning of the end of the generative AI boom.’ From the Blood in the Machine newsletter analysis by Brian Merchant, who is also the Los Angeles Times’ technology columnist:

‘This is it. Generative AI, as a commercial tech phenomenon, has reached its apex. The hype is evaporating. The tech is too unreliable, too often. The vibes are terrible. The air is escaping from the bubble. To me, the question is more about whether the air will rush out all at once, sending the tech sector careening downward like a balloon that someone blew up, failed to tie off properly, and let go—or, more slowly, shrinking down to size in gradual sputters, while emitting embarrassing fart sounds, like a balloon being deliberately pinched around the opening by a smirking teenager.’”

Such evocative imagery. Merchant’s article also notes that, though Enterprise AI was meant to be the way AI firms made their money, it is turning out to be a dud. There are several reasons for this, not the least of which is AI models’ tendency to “hallucinate.”

Okemwa offers several points to support Merchant’s deflating-balloon claim. For example, Microsoft was recently criticized by investors for wasting their money on AI technology. Then there is NVIDIA: the chipmaker recently became the most valuable company in the world thanks to astronomical demand for its hardware to power AI projects. However, a delay of its latest powerful chip dropped its stock’s value by 5%, and market experts suspect its value will continue to decline. The write-up also points to trouble at generative AI’s flagship firm, OpenAI. The company is plagued by a disturbing exodus of top executives, rumors of pending bankruptcy, and a pesky lawsuit from Elon Musk.

Speaking of Mr. Musk, how do those who say AI will kill us all respond to the potential AI downturn? Crickets.

Cynthia Murrell, August 22, 2024

Cyber Security Outfit Wants Its Competition to Be Better Fellow Travelers

August 21, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read a write up which contains some lingo that is not typical Madison Avenue sales speak. The sort of odd orange newspaper published “CrowdStrike Hits Out at Rivals’ Shady Attacks after Global IT Outage.” [This is a paywalled story, gentle reader. Gone are the days when the orange newspaper was handed out in Midtown Manhattan.] CrowdStrike is a company with interesting origins. The firm has become a player in the cyber security market, and it has been remarkably successful. Microsoft — definitely a Grade A outfit focused on making system administrators’ lives as calm as Lake Paseco on a summer morning — allowed CrowdStrike to interact with the most secure component of its software.

What does the leader of CrowdStrike reveal? Let’s take a quick look at a point or two.

First, I noted this passage from the write up which seems a bit of a proactive tactic to make sure those affected by the tiny misstep know that software is not perfect. I mean who knew?

CrowdStrike’s president hit out at “shady” efforts by its cyber security rivals to scare its customers and steal market share in the month since its botched software update sparked a global IT outage. Michael Sentonas told the Financial Times that attempts by competitors to use the July 19 disruption to promote their own products were “misguided”.

I am not sure what misguided means, but I think the idea is that competitors should not try to surf on the little ripples the CrowdStrike misstep caused. A few airline passengers were inconvenienced, sure. But that happens anyway. The people in hospitals whose surgeries were affected seem to be mostly okay in a statistical sense. And those interrupted financial transactions. No big deal. The market is chugging along.


Cyber vendors are ready and eager to help those with a problematic and possibly dangerous vehicle. Thanks, MSFT Copilot. Are your hands full today?

I also circled this passage:

SentinelOne chief executive Tomer Weingarten said the global shutdown was the result of “bad design decisions” and “risky architecture” at CrowdStrike, according to trade magazine CRN. Alex Stamos, SentinelOne’s chief information security officer, warned in a post on LinkedIn it was “dangerous” for CrowdStrike “to claim that any security product could have caused this kind of global outage”.

Yep, dangerous. Other vendors’ software is unlikely to create a CrowdStrike problem. I like this type of assertion. Also, I find the ambulance-chasing approach to closing deals and boosting revenue a normal part of some companies’ marketing. I think one outfit made FUD (fear, uncertainty, and doubt) a useful wrench in the firm’s deal-closing guide to hitting a sales target. As a dinobaby, I could be hallucinating like some of the smart software and the even smarter top dogs in cyber security companies.

I have to include this passage from the orange outfit’s write up:

Sentonas [a big dog at CrowdStrike], who this month went to Las Vegas to accept the Pwnie Award for Epic Fail at the 2024 security conference Def Con, dismissed fears that CrowdStrike’s market dominance would suffer long-term damage. “I am absolutely sure that we will become a much stronger organization on the back of something that should never have happened,” he said. “A lot of [customers] are saying, actually, you’re going to be the most battle-tested security product in the industry.”

The Def Con crowd was making fun of CrowdStrike for its inconsequential misstep. I assume CrowdStrike’s leadership realizes that the award is like having the “old” Mad Magazine devote a cover to a topic.

My view is that [a] the incident will be forgotten. SolarWinds seems to be fading as an issue in the courts and in some experts’ List of Things to Worry About. [b] Microsoft and CrowdStrike can make marketing hay by pointing out that each company has addressed the “issue.” Life will be better going forward. And, [c] Competitors will have to work overtime to cope with a sales retention tactic more powerful than any PowerPoint or PR campaign — discounts, price cuts, and free upgrades to AI-infused systems.

But what about that headline? Will cyber security marketing firms change their sales lingo and tell the truth? Can one fill the tank of a hydrogen-powered vehicle in Eastern Kentucky?

PS. Buying cyber security, real-time alerts, and other gizmos allow an organization to think, “We are secure, right?”

Stephen E Arnold, August 21, 2024

Threat. What Threat? Google Does Not Behave Improperly. No No No.

August 21, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Amazing write up from a true poohbah publication: “Google Threatened Tech Influencers Unless They Preferred the Pixel.” Even more amazing is the Googley response: “We missed the mark?”


Thanks, MSFT Copilot. Good enough.

Let’s think about this.

The poohbah publication reports:

A Pixel 9 review agreement required influencers to showcase the Pixel over competitors or have their relationship terminated. Google now says the language ‘missed the mark.’

What?

I thought Google was working overtime to build relationships and develop trust. I thought Google was characterized unfairly as a monopolist. I thought Google had some of that good old “Do no evil” DNA.

These misconceptions demonstrate how out of touch a dinobaby like me can be.

The write up points out:

The Verge has independently confirmed screenshots of the clause in this year’s Team Pixel agreement for the new Pixel phones, which various influencers began posting on X and Threads last night. The agreement tells participants they’re “expected to feature the Google Pixel device in place of any competitor mobile devices.” It also notes that “if it appears other brands are being preferred over the Pixel, we will need to cease the relationship between the brand and the creator.” The link to the form appears to have since been shut down.

Does that sound like a threat? As a dinobaby and non-influencer, I think the Google is just trying to prevent miscreants like those people posting information about Russia’s special operation from misinterpreting the Pixel gadgets. Look. Google was caught off guard and flipped into Code Red or whatever. Now the Gemini smart software is making virtually everyone’s life online better.

I think the Google is trying to be “honest.” The term, like the word “ethical”, can house many meanings. Consequently, non-Googley phones, thoughts, ideas, and hallucinations are not permitted. Otherwise what? The write up explains:

Those terms certainly caused confusion online, with some assuming such terms apply to all product reviewers. However, that isn’t the case. Google’s official Pixel review program for publications like The Verge requires no such stipulations. (And, to be clear, The Verge would never accept such terms, in accordance with our ethics policy.)

The poohbah publication has ethics. That’s super.

Here’s the “last words” in the article about this issue that missed the mark:

Influencer is a broad term that encompasses all sorts of creators. Many influencers adhere to strict ethical standards, but many do not. The problem is there are no guidelines to follow and limited disclosure to help consumers if what they’re reading or watching was paid for in some way. The FTC is taking some steps to curtail fake and misleading reviews online, but as it stands right now, it can be hard for the average person to spot a genuine review from marketing. The Team Pixel program didn’t create this mess, but it is a sobering reflection of the murky state of online reviews.

Why would big outfits appear to threaten people? There are no consequences. And most people don’t care. Threats are enervating. There’s probably a course at Stanford University on the subject.

Net net: This is new behavior? Nope. It is characteristic of a largely unregulated outfit with lots of money which, at the present time, feels threatened. Why not do what’s necessary to remain wonderful, loved, and trusted. Or else!

Stephen E Arnold, August 21, 2024

Moving Quickly: School Cell Phone Bans

August 21, 2024

In a victory for common sense, 9to5Mac reports, “More Schools Banning Students from Using Smartphones During Class Time.” Proponents of bans argue they improve learning outcomes and reduce classroom disruption. To which we reply: well, duh. They also claim bans protect children from cyberbullying. Maybe. Writer Ben Lovejoy states:

“More schools are banning students from using smartphones in classes, with calls for a federal ban rather than the current mix of state laws. Apple’s home state of California is expected to be the next state to introduce a ban. Orlando has so far taken the toughest line, banning smartphone use during the entire day, and blocking access to social media networks on the school Wi-Fi. Worldwide, around one in four countries has implemented bans or restrictions on the use of smartphones in schools. A 9to5Mac poll conducted a year ago found strong support for the same happening in the US, with 73% in favor and only 21% opposed. … Within the US, four states have already implemented bans, or are in the process of doing so: Florida, Indiana, Louisiana, and South Carolina. Exact policies vary. Some schools allow phones to be used during breaks, while the strictest insist that they are placed in lockers or other safe places at the beginning of the school day, and not retrieved until the end of the day.”

“Cellphone-free education” laws in Minnesota and Ohio will go into effect next year. The governors of California, Virginia, and New York indicate their states may soon follow suit. Meanwhile, according to a survey by the National Parents Union, 70% of parents support bans. But most want students to have access to their phones during lunchtime and other official breaks. Whether just during class times or all day, it can be expensive to implement these policies.

“Pennsylvania recently allotted millions of dollars in grants for schools to purchase lockable bags to store pupils’ phones, while Delaware recently allocated $250,000 for schools to test lockable phone pouches.”

Leaving phones at home is not an option—today’s parents would never stand for it. The days of being unable to reach one’s offspring for hours at a time are long gone. How did parents manage to live with that for thousands of years?

Cynthia Murrell, August 21, 2024
