The 10X Engineer? More Trouble Than They Are Worth

April 25, 2025

Dinobaby, here. No smart software involved, unlike some outfits. I did use Sam AI-Man’s art system to produce the illustration in the blog post.

I like it when I spot a dinobaby fellow traveler. That happened this morning (March 28, 2025) when I saw the headline “In Praise of Normal Engineers: A Software Engineer Argues Against the Myth of the 10x Engineer.”

The IEEE Spectrum article states:

I don’t have a problem with the idea that there are engineers who are 10 times as productive as other engineers. The problems I do have are twofold.


Everyone is amazed that the 10X engineer does amazing things. Does the fellow become the model for other engineers in the office? Not as far as the other engineers are concerned. But the boss loves this super performer. Thanks, OpenAI, good enough.

The two “problems” — note the word “problems” — are:

  1. “Measuring productivity.” That is an understatement, not a problem. With “engineers” working from home or, in my case, from a far-off foreign country, a hospital waiting room, or six feet from me playing video games, productivity is a slippery business.
  2. “Teams own software.” Alas, that is indeed true. In 1962, I used IBM manuals to “create” a way to index. The professor who paid me $3 / hour was thrilled. I kept doing this indexing thing until the fellow died, around the time I started graduate school. Since then, whipping up software confections has required “teams.” Why? I figured out that my indexing trick was pure good fortune. After that, I made darned sure there were other eyes and minds chugging along by my side.

The write up says:

A truly great engineering organization is one where perfectly normal, workaday software engineers, with decent skills and an ordinary amount of expertise, can consistently move fast, ship code, respond to users, understand the systems they’ve built, and move the business forward a little bit more, day by day, week by week.

I like this statement. And here’s another from the article:

The best engineering orgs are not the ones with the smartest, most experienced people in the world. They’re the ones where normal software engineers can consistently make progress, deliver value to users, and move the business forward. Places where engineers can have a large impact are a magnet for top performers. Nothing makes engineers happier than building things, solving problems, and making progress.

Happy workers are magnets.

Now let’s come back to the 10X idea. I used to work at a company that provided nuclear engineering services to the US government and a handful of commercial firms engaged in the nuclear industry. We had a real live 10X type. He could crank out “stuff” with little effort. Among the 600 nuclear engineers employed at this organization, he was the 10X person. Everyone liked him, but he did not have much to say. In fact, his accent made what he said almost impenetrable. He just showed up every day in a plaid coat, doodled on a yellow pad, and handed dot points, a flow chart, or a calculation to another nuclear engineer and went back to doodling.

Absolutely no one at the nuclear engineering firm wanted to be a 10X engineer. From my years of working at this firm, I can say he was a bit of a one-off. When suits visited, a small parade would troop up to his office on the second floor. He shared that office with my close friend, Dr. James Terwilliger. Everyone would smile and look at the green board. Then they would troop out and off to lunch.

I think the presence of this 10X person was a plus for the company. The idea of trying to find another individual who could do the nuclear “stuff” like this fellow was laughable. For some reason, the 10X person liked me, and I got the informal job of accompanying him to certain engagements. I left that outfit after several years to hook up with a blue chip consulting firm. I lost track of the 10X person, but I had the learnings necessary to recognize possible 10X types. That was a useful addition to my bag of survival tips as a minus 3 thinker.

Net net: The presence of a 10X is a plus. Ignoring the other 599 engineers is a grave mistake. The errors of this 10X approach are quite evident today: Unchecked privacy violations, monopolistic behaviors enabled by people who cannot set up a new mobile phone, and a distortion of what it means to be responsible, ethical, and moral.

The 10X concept is little more than a way to make the top one percent the reason for success. Their presence is a positive, but building an organization to rely on 10X anything is one of the main contributing factors to the slow degradation of computer services, ease of use, and, in my opinion, social cohesion.

Engineers are important. The unicorn engineers are important. Balance is important. Without balance “stuff” goes off the rails. And that’s where we are.

Stephen E Arnold, April xx, 2025

Management? Never Been Easier

April 4, 2025

Age-Old Management Techniques That Still Happen

Millennials and Gen Z focus on prioritizing mental health and not dealing with the same BS as earlier generations. Some might see these priorities as infantile, but past generations did not like these management behaviors either. INC breaks down “4 Really Bad Management Behaviors That Drive Good Employees Away,” and the article really drives home how these management techniques have been in place for eons.

The article’s author, Marcel Schwantes, starts with a very wise statement: “Not everyone in management should be there.” He then breaks down four poor management techniques…er…behaviors that employees have been fed up with since forever. The first is shooting down their workers’ ideas. This happens on a daily basis, and everyone in the modern workforce will experience it at some point in their careers. Sometimes ideas need to be shot down, but other times a different way of doing something is just what the doctor ordered.

The second is treating people like numbers. This is a totalitarian perspective that runs rampant in corporate America, China, and North Korea. Businesses want measurable results and bodies in place to fill quotas. This doesn’t always lead to good results, and the number one thing companies learn is how many people they’ll turn over in a year.

Micromanaging is number three. Micromanagers believe everything has to be done by the book or that no one can do anything better than they can. They watch their employees like hawks, creating a stifling environment. This creates stress and more mistakes on the part of the employees. No one thrives with micromanagers.

The last poor management behavior is hoarding information and sharing it only on a need-to-know basis, if at all. When this technique thrives in a work environment, employees are working in the dark with a flashlight that only turns on once in a while.

Schwantes finishes with:

“Want to keep your best people? Treat them like people. Connect with them. Support them. Give them what they need to thrive. Because when you invest in your team, serve their needs, and remove obstacles from their path, they’ll run through walls for you—and that’s how you grow into the leader everyone wants to work for.”

No matter how far technology and humans advance, we’ll always be dealing with these management behaviors. Can anyone else imagine cavemen…er…people dealing with these? They probably didn’t put up with them and whacked their managers over the head with a club.

Whitney Grace, April 4, 2025

Original Research: Not-So-Original Assertions about Content Appropriation

April 2, 2025

No AI. Just a dinobaby sharing an observation about younger managers and their innocence.

The Social Science Research Council published the 30-plus page report “Beyond Public Access in LLM Pre-Training Data.” The subtitle reveals the principal finding: “Non-Public Book Content in OpenAI’s Models.”

The write up states:

Using a legally obtained dataset of 34 copyrighted O’Reilly Media books, we apply the DE-COP membership inference attack method to investigate whether OpenAI’s large language models were trained on copyrighted content without consent. Our AUROC scores show that GPT-4o, OpenAI’s more recent and capable model, demonstrates strong recognition of paywalled O’Reilly book content (AUROC = 82%), compared to OpenAI’s earlier model GPT-3.5 Turbo. In contrast, GPT-3.5 Turbo shows greater relative recognition of publicly accessible O’Reilly book samples. GPT-4o Mini, as a much smaller model, shows no knowledge of public or non-public O’Reilly Media content when tested (AUROC ≈ 50%). Testing multiple models, with the same cutoff date, helps us account for potential language shifts over time that might bias our findings. These results highlight the urgent need for increased corporate transparency regarding pre-training data sources as a means to develop formal licensing frameworks for AI content training.

I want to mention that the DE-COP method provides one way to determine whether a specific data record was part of the training dataset for a machine learning model. The SSRC’s results suggest that as OpenAI enhanced its models, the organization used non-public content. Smaller (less capable) OpenAI models seemed to know less about the test set; the bigger models knew more.
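
How does DE-COP work? As I understand the method, the model takes a multiple-choice quiz: a verbatim passage hidden among paraphrases. A model that picks the verbatim text well above chance has probably seen that text during training. Below is a minimal Python sketch of the scoring end of such a test. The guess rates are invented for illustration; this is a toy version of the general idea, not the researchers’ code.

```python
# Toy sketch of DE-COP-style scoring: compare a model's multiple-choice
# "guess rates" (how often it picks the verbatim passage out of several
# paraphrases) for suspected training books versus books it cannot have
# seen, then compute AUROC. All numbers are invented for illustration.

from sklearn.metrics import roc_auc_score

# Chance on a four-option quiz is 0.25; rates well above that suggest
# the model has memorized the text.
member_scores = [0.61, 0.55, 0.48, 0.70]     # books suspected in training
nonmember_scores = [0.27, 0.24, 0.52, 0.22]  # books published after cutoff

labels = [1] * len(member_scores) + [0] * len(nonmember_scores)
scores = member_scores + nonmember_scores

# 0.5 means the model cannot tell the groups apart; the SSRC report
# cites roughly 0.82 for GPT-4o on paywalled O'Reilly content.
print(f"AUROC = {roc_auc_score(labels, scores):.2f}")
```

The AUROC framing matters because no single book proves anything; the signal is the model’s systematic ability to separate the suspected members from the known non-members.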

The write up concludes that developers of smart software are sucking down data without obtaining permission or entering into licensing deals with publishers, copyright holders, or entities working on content not yet available as public content like this dinobaby blog post.

To sum up: A not-so-original assertion that certain organizations just take content and ignore rights is apparently accurate in this set of research. OpenAI accelerates. Speed is a positive. Breaking speed limits is believed to be no big deal.

A bit of a flashback: Over 40 years ago, I participated in a series of meetings to license the right to create abstracts of original articles from more than 1000 different publications. Our efforts were successful. Most publishers of academic business and management publications knew zero about online information services. The idea of summarizing an eight page article about executive compensation was unfamiliar to most of the people with whom our organization interacted over a period of years. Nevertheless, the owner of the database unit of the Courier Journal & Louisville Times Company insisted that we get signed agreements. The process was time consuming and expensive. Some outfits had representatives. So a negotiation went through layers of bureaucracy. Other publishers wanted a sit down meeting and a dog-and-pony show to explain why we wanted to boil down a 600 word article to about 150 words. There were worries about the impact of online access. There were worries about control of editorial content. There were worries about signing away rights. We couldn’t assuage worries, but we chugged along. The reputations of the Courier Journal and Barry Bingham Jr. helped but our small database operation had to follow the rules.

If the data in the cited document are accurate, the attitude of Barry Bingham Jr. would not have gained much traction at OpenAI. Instead of my getting a pat on the head for setting up the licensing deals with the able help of other Courier Journal professionals, I would have gotten kicked on the shin or worse.

Several observations:

  1. Observing content ownership remains important despite the acceleration toward illegal behaviors in the online sector
  2. Using show-and-tells and explaining the benefits of participating in digital innovations remains important. Knowledge minimizes anger and other negative reactions to pre-emptive behavior
  3. Managing the process is as important as demonstrating a willingness to conform to industry, social, and legal norms.

Let’s assume that the findings in the SSRC report are off by 50 percent. Does that change what appears to be a strong possibility of some fancy dancing at the OpenAI disco?

My conclusion is that taking short cuts is part of the Silicon Valley approach of doing whatever and just saying, “Wow, I am sorry.”

I asked ChatGPT 4 via You.com this question:

Should I create an online product and just use content from the Harvard Business Review without permission?

Here’s the response:

No, you should not create an online product using content from the Harvard Business Review (HBR) without permission. Doing so would likely violate copyright laws and HBR’s usage policies. Here’s why:

HBR’s Copyright and Usage Policies
  1. HBR Material is Copyrighted: Harvard Business Review explicitly states that its content is protected by copyright, and no part of its publications may be reproduced without permission.
  2. Usage Restrictions: HBR has strict policies regarding how its content can be used. For example, even in academic settings, HBR articles cannot be freely distributed unless done through specific channels like creating course packs via "Harvard Business Publishing for Educators"

  3. Permission is Required: If you want to use HBR content, you must obtain explicit permission. This includes reproducing, distributing, or using their material in any form.

Legal and Ethical Implications
  • Copyright Infringement: Using HBR content without permission would likely constitute copyright infringement, which can result in legal action, fines, or other penalties.
  • Ethical Concerns: Beyond legal issues, using someone else’s intellectual property without authorization is unethical and could harm your reputation or business.
What You Should Do Instead
  1. Seek Permission: Contact Harvard Business Review to request permission to use their content. They may have licensing options available for your intended use.
  2. Create Original Content: Instead of using HBR’s material, consider creating your own original content inspired by their ideas (but without copying their text or proprietary materials).
  3. Use Public Domain or Licensed Content: Look for content that is in the public domain or available under a license (e.g., Creative Commons) that allows commercial use.
Conclusion

Using HBR content without permission is not only illegal but also against their explicit policies. To avoid legal and ethical issues, either seek proper licensing or create your own original material.

In this case, OpenAI’s AI may be more intelligent than some people engaged in content acquisition at the firm. But I am a dinobaby. I try to follow the social, legal, and ethical rules my parents, who had only high school educations, taught me. I am a dinobaby. A sad dinobaby.

Stephen E Arnold, April 2, 2025

Amazon: So Many Great Ideas

April 1, 2025

AWS puts its customers first. Well, those who pay for the premium support plan, anyway. A thread on Reddit complains, "AWS Blocking Troubleshooting Docs Behind Paid Premium Support Plan." Redditor Certain_Dog1960 writes:

"When did AWS decide that troubleshooting docs/articles require you to have a paid premium support plan….like seriously who thought this was a good idea?"

Good question. The comments and the screenshot of Amazon’s message make clear that the company’s idea of how to support customers is different from actual customers’ thoughts. However, Certain_Dog posted an encouraging update:

"The paywall has been taken down!!! :)"

Apparently customer outrage still makes a difference. Occasionally.

Cynthia Murrell, March 31, 2025

The Gentle Slide Down the Software Quality Framework

March 21, 2025

Yep, another dinobaby original.

I listened to a podcast called “The WAN Show,” featuring a couple of technology buffs who sell T shirts, mugs, and screwdrivers. What was notable about the program, which is available on Apple podcasts, was the opening story. In a nutshell, the two fellows made clear some problems with Apple’s hardware. The key statement made by one of the fellows was, “I will pay my way to Cupertino and help you Apple engineers fix the problems. I will do it for free.” A number of people younger than I believe that an individual can overcome a bureaucracy.


Someone is excited about taking the long slide down in software quality. Thanks, OpenAI, definitely good enough.

I forgot about the comment and the pitch to buy a backpack until I read “Apple’s Software Quality Crisis: When Premium Hardware Meets Subpar Software.” The write up hit upon some of the WAN grousing and introduced a number of ideas about Apple’s management focus.

Here’s a comment from the write up I circled:

The performance issues don’t stop at sluggish response times. During these use cases, my iPad overheated, making it uncomfortable to hold or even rest the palm on, raising concerns about potential long-term hardware damage. What made this particularly frustrating is that these aren’t third-party applications pushing the hardware to its limits. These are Apple’s own applications that should be theoretically optimized for their hardware. After demonstrating the issues in person to Apple Store staff (that were courteous and professional), the support representative that was handling my case suggested a hardware replacement. However, after further discussion, we both concluded this was likely a software problem rather than a hardware defect.

To a dinobaby like me, I interpreted the passage as saying, “The problem can’t be fixed. Suck it up, buttercup.”

I then discovered more than 1,000 comments on the “Apple’s Software Quality Crisis” article. I scanned them and then turned to one of the ever reliable smart software systems to which I have access and asked, “What are the main themes of the 1,000 comments?” (A rough sketch of this summarization step appears after the list below.)

Here’s what the smart software output, and, please, keep in mind that smart software hallucinates, goes bonkers, and, if a product of Google, really has trouble with cheese-related prompts. The four points output are:

  • Persistent Bugs: Users report long-standing issues, such as date-handling errors in Contacts that have remained unresolved for years.
  • Declining User Experience: There’s a sentiment that recent design changes, like the macOS Settings app, have led to a less intuitive user experience.
  • Inconsistent Quality Across Platforms: Some users feel that Apple’s software quality has become comparable to other platforms, lacking the distinctiveness it once had.
  • Ineffective Bug Reporting: Concerns are raised about Apple’s bug reporting system, with users feeling their feedback doesn’t lead to timely fixes.
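
For the curious, the mechanical part of that summarization step is mundane. Here is a minimal sketch of one way to do it, assuming an OpenAI-compatible API and key. I used a different front end, so the model name, prompt, and helper function are illustrative, not a record of my actual session.

```python
# Minimal sketch: batch comment text into one prompt and ask a model
# for the main themes. Assumes the openai Python package (>= 1.0) and
# an OPENAI_API_KEY in the environment. The sample comments are
# hypothetical stand-ins for the article's real comment thread.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def main_themes(comments: list[str], model: str = "gpt-4o-mini") -> str:
    # A thousand comments may not fit one context window; a sturdier
    # version would chunk the thread and merge per-chunk summaries.
    joined = "\n---\n".join(comments)
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": "List the main themes of these article comments "
                       "as short bullet points:\n\n" + joined,
        }],
    )
    return response.choices[0].message.content

print(main_themes([
    "Contacts has had the same date-handling bug for years.",
    "The redesigned macOS Settings app is a maze.",
]))
```

Asking for themes rather than a summary nudges the model toward a deduplicated bullet list, which is easier to check against the source comments than free-form prose.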

Okay, we have a sample based on one podcast, one blog essay, and a number of randos who have commented on the “Apple’s Software Quality Crisis” article. Let me offer several observations:

  1. Apple, like Amazon, Facebook (Metazuck or whatever), Google, and Microsoft, cannot deliver software that does much more than achieve the status of “good enough.” Perhaps size and the limitations of humans contribute to this widespread situation?
  2. The problem is not fixable because new software comes out and adds to the woes of the previous software. Therefore, the volume of problems goes up, and there is neither money nor time to pay down the technical debt. In my experience, this means that a slow descent down a quite slippery gradient occurs. The gravity of technical debt creates the issues the complaining individuals identify.
  3. The current economic and regulatory environment does not punish these organizations for their products and services. The companies’ managers chug along, chase their bonuses, and ignore the gentle drift to quite serious problems between the organizations and their customers.

So what? Sorry, I have no solutions. Many of the “fixes” require deep familiarity with the original software. Most fixes are wrappers because rewrites take too long, or the information required to fix one thing and not break two others is not available.

Welcome to the degrading status quo.

Stephen E Arnold, March 21, 2025

Management Insights Circa Spring 2025

March 18, 2025

Another dinobaby blog post. Eight decades and still thrilled when I point out foibles.

On a call today, one of the people asked, “Did you see that excellent leadership comes from ambivalence?” No, sorry. After my years at the blue chip consulting firm, I ignore those insights. Ambivalence. The motivated leader cares about money, the lawyers, the vacations, the big customer, and money. I think I have these in the correct order.

Imagine my surprise when I read another management breakthrough. Navigate to “Why Your ‘Harmonious’ Team Is Actually Failing.” The insight is that happy teams are in coffee shop mode. If one is not motivated by one of the factors I identified in the first paragraph of this essay, life will be like a drive-through smoothie shop. Kick back, let someone else do the work, and lap up that banana and tangerine goodie.

The write up reports on a management concept: one should strive for a roughie, maybe with a dollop of chocolate and some salted nuts. Get that blood pressure rising. Here’s a passage I noted:

… real psychological safety isn’t about avoiding conflict. It’s about creating an environment where challenging ideas makes the team stronger, not weaker.

The idea is interesting. I have learned that many workers, like helicopter parents, want to watch and avoid unnecessary conflicts, interactions, and dust ups. The write up slaps some psychobabble on this management insight. That’s perfect for academics on the tenure track and for quite sensitive, big-spending clients. But often a more dynamic approach is necessary. If it is absent, there is a problem with the company. Hello, General Motors, Intel, and Boeing.

Stifle much?

The write up adds:

I’ve seen plenty of “nice” teams where everyone was polite, nobody rocked the boat, and meetings were painless. And almost all of those teams produced ok work. Why? Because critical thinking requires friction. Those teams weren’t actually harmonious—they were conflict-avoidant. The disagreements still existed; they just went underground. Engineers would nod in meetings then go back to their desks and code something completely different. Design flaws that everyone privately recognized would sail through reviews untouched. The real dysfunction wasn’t the lack of conflict—it was the lack of honest communication. Those teams weren’t failing because they disagreed too little; they were failing because they couldn’t disagree productively.

Who knew? Hello, General Motors, Intel, and Boeing.

Here’s the insight:

Here’s the weird thing I’ve found: teams that feel safe enough to hash things out actually have less nasty conflict over time. When small disagreements can be addressed head-on, they don’t turn into silent resentment or passive-aggressive BS. My best engineering teams were never the quiet ones—they were the ones where technical debates got spirited, where different perspectives were welcomed, and where we could disagree while still respecting each other.

The challenge is to avoid creating complacency.

Stephen E Arnold, March 18, 2025

Ah, Apple, Struggling with AI like Amazon, Google, et al

March 14, 2025

This blog post is the work of a humanoid dinobaby. If you don’t know what a dinobaby is, you are not missing anything. Ask any 80-year-old, why don’t you?

Yes, it is Friday, March 14, 2025. Everyone needs a moment of amusement. I found this luscious Apple bit and thought I would share it. Dinobabies like knowing how the world and Apple treat other dinobabies. You, as a younger humanoid, probably don’t care. Someday you will.

“Grandmother Gets X-Rated Message after Apple AI Fail” reports:

A woman from Dunfermline has spoken of her shock after an Apple voice-to-text service mistakenly inserted a reference to sex – and an apparent insult – into a message left by a garage… An artificial intelligence (AI) powered service offered by Apple turned it into a text message which – to her surprise – asked if she had been "able to have sex" before calling her a "piece of ****".

Not surprisingly, Apple did not respond to the BBC request for a comment. Unperturbed, the Beeb made some phone calls. According to the article:

An expert has told the BBC the AI system may have struggled in part because of the caller’s Scottish accent, but far more likely factors were the background noise at the garage and the fact he was reading off a script.

One BBC expert offered these reasons for the fouled message:

Peter Bell, a professor of speech technology at the University of Edinburgh, listened to the message left for Mrs Littlejohn. He suggested it was at the "challenging end for speech-to-text engines to deal with". He believes there are a number of factors which could have resulted in rogue transcription:

  • The fact it is over the telephone and, therefore, harder to hear
  • There is some background noise in the call
  • The way the garage worker speaks is like he is reading a prepared script rather than speaking in a natural way

"All of those factors contribute to the system doing badly," he added. "The bigger question is why it outputs that kind of content."

I have a much simpler explanation. As at Microsoft, marketing is much easier than delivering something that works for humans. I am tempted to make fun of Apple Intelligence, conveniently abbreviated AI. I am tempted to point out that real-world differences in the flow of Apple computers are not discernible when browsing Web pages or entering one’s iTunes password into the system several times a day.

Let’s be honest. Apple is big. Like Amazon (heaven help Alexa by the way), Google (the cheese fails are knee slappers, Sundar), and the kindergarten squabbling among Softies and OpenAI at Microsoft — Apple cannot “do” smart software at this time. Therefore, errors will occur.

On the other hand, perhaps the dinobaby who received the message is “a piece of ****"? Most dinobabies are.

Stephen E Arnold, March 14, 2025

Microsoft Leadership Will Be Replaced by AI… Yet

March 14, 2025

Whenever we hear the latest tech announcement, we believe it is doom and gloom for humanity. While fire, the wheel, the Industrial Revolution, and computers have yet to dismantle humanity, the jury is still out on AI. However, Gizmodo reports that Satya Nadella of Microsoft says we shouldn’t be worried about AI and that it is time to stop glorifying it: “Microsoft’s Satya Nadella Pumps the Brakes on AI Hype.” Nadella placed a damper on AI hype with the following statement from a podcast: “Success will be measured through tangible, global economic growth rather than arbitrary benchmarks of how well AI programs can complete challenges like obscure math puzzles. Those are interesting in isolation but do not have practical utility.”

Nadella said that technology workers are saying AI will replace humans, but that’s not the case. He calls that type of thinking a distraction and says the tech industry needs to “get practical and just try and make money before investors get impatient.” OpenAI CEO Sam Altman, Microsoft’s close AI partner, is a prime example of AI fear mongering. He uses it as a tool to give himself power.

Nadella continued that if the tech industry and its investors want AI growth akin to the Industrial Revolution, then let’s concentrate on it. Proof of that type of growth would be 10% economic growth attributable to AI. Investing in AI can’t just happen on the supply side; there needs to be demand for AI-built products.

Nadella’s statements are like pouring a bucket of cold water on a sleeping person:

"In that sense, Nadella is trying to slap tech executives awake and tell them to cut out the hype. AI safety is somewhat of a concern—the models can be abused to create deepfakes or mass spam—but it exaggerates how powerful these systems are. Eventually, push will come to shove and the tech industry will have to prove that the world is willing to put down real money to use all these tools they are building. Right now, the use cases, like feeding product manuals into models to help customers search them faster, are marginal.”

Many well-known companies still plan on implementing AI despite their difficulties. Other companies have downsized their staffing to include more AI chatbots, but the bots prove to be inefficient and frustrating. Microsoft, however, is struggling with management issues related to OpenAI, its internal “experts,” and the Softies who think they can do better. (Did Microsoft ask Grok, “How do I manage this multi-billion-dollar bonfire?”)

Let’s blame it on AI.

Whitney Grace, March 14, 2025

Next-Gen IT Professionals: Up for Doing a Good Job?

March 10, 2025

The United States is facing a crisis when it comes to decent-paying jobs. Businesses are watching their budgets like misers clutching their purse strings, so they’re hiring the cheapest tech workers possible. Medium explains that “8 Out Of 10 Senior Engineers Feel Undervalued: The Hidden Crisis In Tech’s Obsession With Junior Talent.”

Another term for budgeting and being cheaper is “cost optimization.” Experienced tech workers are being replaced with green newbies who wouldn’t know how to find errors if they were on the backs of their hands. Or the experienced tech workers are bogged down by mentoring/fixing the mistakes of their younger associates.

It’s a recipe for disaster, but cost optimization is what businesses care about. There will be casualties in the trend, not all of them human:

“The silent casualties of this trend:

1. Systems designed by juniors who’ve never seen a server catch fire

2. Codebases that work right up until they don’t

3. The quiet exodus of graybeards into early retirement”

Junior tech workers are cheaper, but experience is not something one can just ask smart software to impart in a couple hundred words. Businesses are also treating their seasoned employees like free mentors:

“I’m all for mentoring. But when companies treat seniors as:

  • Free coding bootcamp instructors
  • Human linters for junior code
  • On-call explainers of basic algorithms

…they’re not paying for mentorship. They’re subsidizing cheap labor with senior salaries.”

There’s a happy medium where having experienced tech experts work with junior tech associates can be beneficial for everyone involved. But it is cheaper to dump the dinobabies and assume that those old systems can be fixed when they go south.

Whitney Grace, March 10, 2025

Sergey Says: Work Like It Was 1975 at McKinsey or Booz, Allen

March 6, 2025

Yep, another dinobaby original.

Sergey Brin, invigorated with his work at the Google on smart software, has provided some management and work life tips to today’s job hunters and aspiring Alphabet employees. In “In Leaked Memo to Google’s AI Workers, Sergey Brin Says 60 Hours a Week Is the Sweet Spot and Doing the Bare Minimum Can Demoralize Peers,” Mr. Brin offers his view of sage management and career advice. (I do want to point out that the write up does not reference the work ethic and other related interactions of the Google Glass marketing team. My view of this facet of Mr. Brin’s contributions suggests that it is tough to put in 60 hours a week while an employee is ensconced in the Stanford Medical psychiatric ward. But that’s water under the bridge, so let’s move on to the current wisdom.)

The write up reports:

Sergey Brin believes Google can win the race to artificial general intelligence and outlined his ideas for how to do that—including a workweek that’s 50% longer than the standard 40 hours.

Presumably working harder will allow Google to avoid cheese mishaps related to pizza and Super Bowl advertising. Harder working Googlers will allow the company to avoid the missteps which have allowed unenlightened regulators in the European Union and the US to find the company exercising behavior which is not in the best interest of the service’s “users.”

The write up says:

“A number of folks work less than 60 hours and a small number put in the bare minimum to get by,” Brin wrote on Wednesday. “This last group is not only unproductive but also can be highly demoralizing to everyone else.”

I wonder if a consistent, documented method for reviewing the work of employees would allow management to offer training, counseling, or incentives to get the mavericks back in the herd.

The protests, the allegations of erratic punitive actions like firing people who use words like “stochastic”, and the fact that the 60-hour information comes from a leaked memo — each of these incidents suggests that the management of the Google may have some work to do. You know, that old nosce teipsum stuff.

The Fortune write up winds down with this statement:

Last year, he acknowledged that he “kind of came out of retirement just because the trajectory of AI is so exciting.” That also coincided with some high-profile gaffes in Gemini’s AI, including an image generator that produced racially diverse Na#is. [Editor note: Member of a German affiliation group in the 1930s and 1940s. I have to avoid the Google stop words list.]

And the cheese, the Google Glass marketing tours, and so much more.

Stephen E Arnold, March 6, 2025
