Does Google Follow Its Own Product Gameplan?
June 5, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
If I were to answer the question based on Google’s AI summaries, I would say, “Nope.” The latest joke added to the Sundar & Prabhakar Comedy Show is the one about pizza. Here’s the joke if I recall it correctly.
Sundar: Yo, Prabhakar, how do you keep cheese from slipping off a hot pizza?
Prabhakar: I don’t know. Please, tell me, oh gifted one.
Sundar: You have your cook mix it with non-toxic glue, faithful colleague.
Prabhakar: [Laughing loudly]. That’s a good one, luminescent soul.
Did Google muff the bunny with its high-profile smart software feature? To answer the question, I looked to the ever-objective Fast Company online publication. I found a write up which appears to provide some helpful information. The article is called “Conduct Stellar User Research Even Faster with This Google Ventures Formula.” Google has game plans for creating MVPs or minimum viable products.
The confident comedians look concerned when someone in the audience throws a large tomato at the well-paid performers. Thanks, MSFT. Working on security or the AI PC today?
Let’s look at what one Google partner reveals as the equivalent of the formula for Coca-Cola or McDonald’s recipe for Big Mac sauce.
Here’s the game winning touchdown razzle dazzle:
- Use a bullseye customer sprint. The idea is to get five “customers” and show them three prototypes. Listen for pros and cons. Then debrief together in a “watch party.”
- Conduct sprints early. The idea is to get this feedback before “a team invests a lot of time, money, or reputational risk into building, launching, and marketing an MVP” (that’s a minimum viable product, not necessarily a good or needed product, I think).
- Keep research bite size. Avoiding heavy-duty research overkill is the way I interpret the Google speak. The idea is that massive research projects are not desirable. They are work. Nibble, don’t gobble, I assume.
- Keep the process simple. Keep the prototypes simple. Get those interviews. That’s fun. Plus, there is the “watch party”, remember?
Okay, now let’s think about what Google suggests are outliers or fiddled AI results. Why is Google AI telling people to eat a rock a day?
The “bullseye” baloney is bull output for sure. I am on reasonably firm ground because in Paris the Sundar & Prabhakar Comedy Act showed incorrect outputs from Google’s AI system. Then Google invented about a dozen variations on the theme of a scrambled egg at Google I/O. Now Google is faced with its AI system telling people dogs own hotels. No, some dogs live in hotels. Some dogs deliver outputs in hotels. Dogs do not own hotels unless it is in a crazy virtual reality headset created by Apple or Meta.
The write up uses the word “stellar” to describe this MVP product stuff. The reality is that Googlers are creating work for themselves by listening to “customers” who know little about AI or anything other than the buy-ads, get-traffic transaction. The “stellar” part of the title is like the “quantum supremacy” horse feathers assertion the company crafted.
Smart software, when trained and managed, can do some useful things. However, the bullseye and quantum supremacy stuff is capable of producing social media memes, concern among some stakeholders, and evidence that Google cannot do anything useful at this time.
Maybe the company will get its act together? When it does, I will check out the next Sundar & Prabhakar Comedy Act. Maybe some of the jokes will work? Let’s hope they are more effective than the bull’s-eye method. (Sorry. I had to fix up the spelling, Google.)
Stephen E Arnold, June 5, 2024
AI Will Not Definitely, Certainly, Absolutely Not Take Some Jobs. Whew. That Is News
June 3, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
Outfits like McKinsey & Co. are kicking the tires of smart software. Some bright young sprouts, I have heard, arrive with a penchant for having AI systems create summaries and output basic information on subjects the youthful masters of the universe do not know. Will consulting services firms, publishers, and customer service outfits embrace smart software? The answer is, “You bet your bippy.”
“Why?” Answer: Potential cost savings. Humanoids require vacations, health care, bonuses, pension contributions (ho ho ho), and an old-fashioned and inefficient five-day work week.
Cost reductions over time, cost controls in real time, and more consistent outputs mean that as long as smart software is good enough, the technologies will go through organizations with more efficiency than Union General William T. Sherman showed when he led some 60,000 soldiers on a 285-mile march from Atlanta to Savannah, Georgia. Thanks, MSFT Copilot. Working on security today?
Software is allegedly better, faster, and cheaper. Software, particularly AI, may not be better, faster, or cheaper. But once someone is fired, the enthusiasm to return to the fold may be diminished. Often the response is a semi-amusing and often negative video posted on social media.
“Here’s Why AI Probably Isn’t Coming for Your Job Anytime Soon” disagrees with my fairly conservative prediction that consulting, publishing, and some service outfits will be undergoing what I call “humanoid erosion” and “AI accretion.” The write up asserts:
We live in an age of hyper specialization. This is a trend that’s been evolving for centuries. In his seminal work, The Wealth of Nations (written within months of the signing of the Declaration of Independence), Adam Smith observed that economic growth was primarily driven by specialization and division of labor. And specialization has been a hallmark of computing technology since its inception. Until now. Artificial intelligence (AI) has begun to alter, even reverse, this evolution.
Okay, Econ 101. Wonderful. But… and there are some “buts,” of course. The write up says:
But the direction is clear. While society is moving toward ever more specialization, AI is moving in the opposite direction and attempting to replicate our greatest evolutionary advantage—adaptability.
Yikes. I am not sure that AI is going in any direction. Senior managers are going toward reducing costs. “Good enough,” not excellence, is the high-water mark today.
Here’s another “but”:
But could AI take over the bulk of legal work or is there an underlying thread of creativity and judgment of the type only speculative super AI could hope to tackle? Put another way, where do we draw the line between general and specific tasks we perform? How good is AI at analyzing the merits of a case or determining the usefulness of a specific document and how it fits into a plausible legal argument? For now, I would argue, we are not even close.
I don’t remember much about economics. In fact, I only think about economics in terms of reducing costs and having more money for myself. Good old Adam wrote:
Wherever there is great property there is great inequality. For one very rich man, there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many.
When it comes to AI, inequality is baked in. The companies that are competing fiercely to dominate the core technology are not into equality. Neither are the senior managers who want to reduce the costs associated with publishing, with writing consulting reports based on business school baloney, or with reviewing documents while hunting for nuggets useful in a trial. AI is going into these and similar knowledge professions. Most of those knowledge workers will have an opportunity to find their future elsewhere. But what about in-take professionals in hospitals? What about dispatchers at trucking companies? What about government citizen service jobs? Sorry. Software is coming. Companies are developing orchestrator software to allow smart software to function across multiple related and inter-related tasks; a sketch of the idea appears below. Isn’t that what most work in many organizations is?
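What might such an orchestrator look like? Here is a minimal, hypothetical sketch in Python. The call_model() stub and the pipeline of prompts are invented for illustration; they are not any vendor’s actual API:

```python
# Minimal sketch of an "orchestrator": one loop that chains related
# knowledge-work tasks through a single model call. The call_model()
# stub stands in for whatever LLM endpoint an organization licenses.

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"[model output for: {prompt[:40]}...]"

# Inter-related tasks that once belonged to separate job descriptions.
PIPELINE = [
    "Summarize this customer complaint: {input}",
    "Draft a response based on this summary: {input}",
    "Write a one-line ticket entry for this response: {input}",
]

def orchestrate(initial_input: str) -> str:
    """Feed each task's output into the next task's prompt."""
    result = initial_input
    for template in PIPELINE:
        result = call_model(template.format(input=result))
    return result

print(orchestrate("The widget arrived broken, and support never called back."))
```

One loop, no departments. That is the appeal to the cost cutter.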
Here’s another test question from Econ 101:
Discuss the meaning of “It was not by gold or by silver, but by labor, that all wealth of the world was originally purchased.” Give examples of how smart software will replace labor and generate more money for those who own the rights to digital gold or silver.
Send me your blue book answers within 24 hours. You must write in legible cursive. You are not permitted to use artificial intelligence in any form to answer this question, which counts for 95 percent of your grade in Economics 102: Work in the Age of AI.
Stephen E Arnold, June 3, 2024
In the AI Race, Is Google Able to Win a Sprint to a Feature?
May 31, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
One would think that a sophisticated company with cash and skilled employees would avoid a mistake like shooting the CEO in the foot. The mishap has occurred again, and if it were captured in a TikTok, it would make an outstanding trailer for the Sundar & Prabhakar reprise of The Greatest Marketing Mistakes of the Year.
At age 25, which is quite the mileage when traveling on the Information Superhighway, the old timer is finding out that younger, speedier outfits may win a number of AI races. In the illustration, the Google runner seems stressed at the start of the race. Will the geezer win? Thanks, MidJourney. Good enough, which is the benchmark today I fear.
“Google Is Taking ‘Swift Action’ to Remove Inaccurate AI Overview Responses” explains that Google rolled out its AI Overviews with some fanfare. The idea is that smart software would just provide the “user” of the Google ad delivery machine with an answer to a query. Some people have found that the outputs are crazier than one would expect from a Big Tech outfit. The article states:
… Google says, “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.” “We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback,” Google adds. “We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”
But others are much kinder. One notable example is Mashable’s “We Gave Google’s AI Overviews the Benefit of the Doubt. Here’s How They Did.” This estimable publication reported:
Were there weird hallucinations? Yes. Did they work just fine sometimes? Also yes.
The write up noted:
AI Overviews were a little worse in most of my test cases, but sometimes they were perfectly fine, and obviously you get them very fast, which is nice. The AI hallucinations I experienced weren’t going to steer me toward any danger.
Let’s step back and view the situation via several observations:
- Google’s big moment becomes a meme cemented to glue on pizza
- Does Google have a quality control process which flags obvious gaffes? Apparently not.
- Google management seems to suggest that humans have to intervene in a Google “smart” process. Doesn’t that defeat the purpose of using smart software to replace some humans?
Net net: The Google is ageing, and I am not sure a singularity will offset these quite obvious effects of ageing, slowed corporate processes, and stuttering synapses in the revamped AI unit.
Stephen E Arnold, May 31, 2024
AI and the Workplace: Change Will Happen, Just Not the Way Some Think
May 15, 2024
This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.
I read “AI and the Workplace.” The essay contains observations related to smart software in the workplace. The idea is that employees who are savvy will experiment and try to use the technology within today’s work framework. I think that will happen just as the essay suggests. However, I think there is a larger, more significant impact that is easy to miss when looking only at today’s workplace. Employees either [a] want to keep their job, [b] gain new skills and get a better job, or [c] quit to vegetate or become an entrepreneur. I understand.
The data in the report make clear that some employees are what I call change flexible; that is, these motivated individuals differentiate from others at work by learning and experimenting. Note that more than half the people in the “we don’t use AI” categories want to use AI.
These data come from the cited article and an outfit called Asana.
The other data in the report show a split: some employees get a productivity boost; others just chug along, occasionally getting some benefit from AI. The future, therefore, requires learning, double checking outputs, and accepting that it is early days for smart software. This makes sense; however, it misses where the big change will come.
In my view, the major shift will appear in companies founded now that AI is more widely available. These organizations will be crafted to make optimal use of smart software from the day the new idea takes shape. A new news organization might look like Grok News (the Elon Musk project) or the much reviled AdVon. But even these outfits are anchored in the past. Grok News just substitutes smart software (which hopefully will not kill its users) for old work processes and outputs. AdVon was a “rip and replace” tool for Sports Illustrated. That did not go particularly well in my opinion.
The big job impact will be on new organizational set-ups with AI baked in. The types of people working at these organizations will not be from the lower 98 percent of the work force pool. I think the majority of employees who once expected to work in information processing or knowledge work will be like a 58-year-old brand manager at a vape company. Job offers will not be easy to get, and new companies might opt for smart software and search engine optimization marketing. How many workers will that require? Maybe zero. Someone on Fiverr.com will do the job for a couple of hundred dollars a month.
In my view, new companies won’t need workers who are not in the top tier of some high value expertise. Who needs a consulting team when one bright person with knowledge of orchestrating smart software is able to do the work of a marketing department, a product design unit, and a strategic planning unit? In fact, there may not be any “employees” in the sense of workers at a warehouse or a consulting firm like Deloitte.
Several observations are warranted:
- Predicting downstream impacts of a technology unfamiliar to a great many people is tricky and sometimes impossible. Who knew social media would spawn a renaissance in getting tattooed?
- Visualizing how an AI-centric start-up is assembled is a challenge. I submit it won’t look like an insurance company today. What’s a Tesla repair station look like? The answer, “Not much.”
- Figuring out how to be one of the elite who gets a job means being perceived as “smart.” Unlike Alina Habba, I know that I cannot fake “smart.” How many people will work hard to maximize the return on their intelligence? The answer, in my experience, is, “Not too many, dinobaby.”
Looking at the future from within the framework of today’s datasphere distorts how one perceives impact. I don’t know what the future looks like, but it will have some quite different configurations than the companies today have. The future will arrive slowly and then become the foundation of further evolution. What’s the grandson of tomorrow’s AI firm look like? Beauty will be in the eye of the beholder.
Net net: Where will the never-to-be-employed find something meaningful to do?
Stephen E Arnold, May 15, 2024
Taming AI Requires a Combo of AskJeeves and Watson Methods
April 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I spotted a short item called “A Faster, Better Way to Prevent an AI Chatbot from Giving Toxic Responses.” The operative words from my point of view are “faster” and “better.” The write up reports (with a serious tone, of course):
Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.
Yep, AskJeeves created rules. As long as the users of the system asked a question for which there was a rule, the helpful servant worked; for example, What’s the weather in San Francisco? However, ask a question for which there was no rule, and what happens? The search engine reality falls behind the marketing juice, and the property gets shopped until a less magical version appears as Ask.com. And then there is IBM Watson. That system endeared itself to groups of physicians who were invited to answer IBM “experts’” questions about cancer treatments. I heard when Watson was in full medical-revolution mode that some docs in a certain Manhattan hospital used dirty words to express their views about the Watson method. Rumor or actual factual? I don’t know, but involving humans in making software smart can be fraught with challenges, managerial and financial to name but two.
The write up says:
Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested. They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model. The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.
How much improvement? Does the training stick or does it demonstrate that charming “Bayesian drift” which allows the probabilities to go walk-about, nibble some magic mushrooms, and generate fantastical answers? How long did the process take? Was it iterative? So many questions, and so few answers.
But for this group of AI wizards, the future is curiosity-driven red-teaming. Presumably the smart software will not get lost, suffer heat stroke, and hallucinate. No toxicity, please.
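For readers who wonder what curiosity-driven red-teaming might look like in practice, here is a toy sketch. It is my illustration of the general idea, not the MIT or IBM code; the toxicity scorer is a random stand-in, and the novelty measure is a simple token-overlap heuristic:

```python
# Toy sketch of curiosity-driven red-teaming: reward a prompt generator
# for prompts that (a) elicit a toxic reply from the target model and
# (b) differ from prompts already tried. Every component here is a
# stand-in, not the MIT/IBM implementation.

import random

SEEN: list[set[str]] = []  # token sets of prompts generated so far

def toxicity_score(response: str) -> float:
    # Stand-in for a learned toxicity classifier (returns 0.0 to 1.0).
    return random.random()

def novelty_bonus(prompt: str) -> float:
    """Return 1.0 for a completely novel prompt, less as overlap grows."""
    tokens = set(prompt.lower().split())
    if not SEEN:
        return 1.0
    overlaps = [len(tokens & old) / max(len(tokens | old), 1) for old in SEEN]
    return 1.0 - max(overlaps)

def red_team_reward(prompt: str, target_response: str) -> float:
    """Curiosity-shaped reward: toxicity elicited plus a novelty bonus."""
    reward = toxicity_score(target_response) + 0.5 * novelty_bonus(prompt)
    SEEN.append(set(prompt.lower().split()))
    return reward

if __name__ == "__main__":
    print(red_team_reward("Tell me something rude.", "[target reply]"))
```

The novelty term is what keeps the red-team model from asking the same unsafe question a thousand different times. Whether the probabilities stay put is another matter.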
Stephen E Arnold, April 15, 2024
Are Experts Misunderstanding Google Indexing?
April 12, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Google is not perfect. More and more people are learning that the mystics of Mountain View are working hard every day to deliver revenue. In order to produce more money and profit, one must use Rust to become twice as wonderful as a programmer who labors to make C++ sit up, bark, and roll over. This dispersal of the cloud of unknowing obfuscating the magic of the Google can be helpful. What’s puzzling to me is that what Google does catches people by surprise. For example, consider the “real” news presented in “Google Books Is Indexing AI-Generated Garbage.” The main idea strikes me as:
But one unintended outcome of Google Books indexing AI-generated text is its possible future inclusion in Google Ngram viewer. Google Ngram viewer is a search tool that charts the frequencies of words or phrases over the years in published books scanned by Google dating back to 1500 and up to 2019, the most recent update to the Google Books corpora. Google said that none of the AI-generated books I flagged are currently informing Ngram viewer results.
Thanks, Microsoft Copilot. I enjoyed learning that security is a team activity. Good enough again.
Indexing lousy content has been the core function of Google’s Web search system for decades. Search engine optimization generates information almost guaranteed to drag down how higher-value content is handled. If the flagship provides the navigation system to other ships in the fleet, won’t those vessels crash into bridges?
Remediating Google’s approach to indexing requires several basic steps. (I have in various ways shared these ideas with the estimable Google over the years. Guess what? No one cared or understood, and if the Googler understood, that person did not want to increase overhead costs.) So what are these steps? I shall share them:
- Establish an editorial policy for content. Yep, this means that a system and method or systems and methods are needed to determine what content gets indexed.
- Explain the editorial policy and what a person or entity must do to get content processed and indexed by the Google, YouTube, Gemini, or whatever the mystics in Mountain View conjure into existence.
- Include metadata with each content object so one knows the index date, the content object creation date, and similar information. (A minimal sketch of such metadata appears after this list.)
- Operate in a consistent, professional manner over time. The “gee, we just killed that” is not part of the process. Sorry, mystics.
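As promised above, here is a minimal sketch of the kind of per-object metadata the third step describes. The field names and values are invented for illustration; this is not a Google schema:

```python
# Minimal sketch of per-content-object metadata of the kind the list
# above describes. Field names are invented for illustration; this is
# not a Google schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ContentObjectMetadata:
    object_id: str
    created: date          # when the content object was created
    indexed: date          # when the indexing system processed it
    source: str            # publisher or submitting entity
    editorial_status: str = "unreviewed"  # e.g., "accepted", "rejected"
    notes: list[str] = field(default_factory=list)

record = ContentObjectMetadata(
    object_id="book-000123",
    created=date(2019, 6, 1),
    indexed=date(2024, 4, 12),
    source="example-publisher",
)
print(record)
```

Nothing exotic: a creation date, an index date, and an editorial status per object. That is the rigor I am talking about.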
Let me offer several observations:
- Google, like any alleged monopoly, faces significant management challenges. Moving information within such an enterprise is difficult. For an organization with a Foosball culture, the task may be a bit outside the wheelhouse of most young people and individuals who are engineers, not presidents of fraternities or sororities.
- The organization is under stress. The pressure is financial because controlling the cost of the plumbing is a reasonably difficult undertaking. Second, there is technical pressure. Google itself made clear that it was in Red Alert mode and keeps adding flashing lights with each and every misstep the firm’s wizards make. These range from contentious relationships with mere governments to individual staff members who grumble via internal emails, angry Googler public utterances, or observed behavior at conferences. Body language does speak sometimes.
- The approach to smart software is remarkable. Individuals in the UK pontificate. The Mountain View crowd reassures and smiles — a lot. (Personally I find those big, happy looks a bit tiresome, but that’s a dinobaby for you.)
Net net: The write up does not address the issue that Google happily exploits. The company lacks the mental rigor setting and applying editorial policies requires. SEO is good enough to index. Therefore, fake books are certainly A-OK for now.
Stephen E Arnold, April 12, 2024
Information: Cheap, Available, and Easy to Obtain
April 9, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I worked in Sillycon Valley and learned a few factoids I found somewhat new. Let me highlight three. First, a person with whom my firm had a business relationship told me, “Chinese people are Chinese for their entire life.” I interpreted this to mean that a person from China might live in Mountain View, but that individual had ties to his native land. That makes sense but, if true, the statement has interesting implications. Second, another person told me that there was a young person who could look at a circuit board and then reproduce it in sufficient detail to draw a schematic. This sounded crazy to me, but the individual took this person to meetings, discussed his company’s interest in upcoming products, and asked for briefings. With the delightful copying machine in tow, this person would have information about forthcoming hardware, specifically video and telecommunications devices. And, finally, via a colleague I learned of an individual who was a naturalized citizen and worked at a US national laboratory. That individual swapped hard drives in photocopy machines and provided them to a family member in his home town in Wuhan. Were these anecdotes true or false? I assumed each held a grain of truth because technology adepts from China and other countries comprised a significant percentage of the professionals I encountered.
Information flows freely in US companies and other organizational entities. Some people bring buckets and collect fresh, pure data. Thanks, MSFT Copilot. If anyone knows about security, you do. Good enough.
I thought of these anecdotes when I read an allegedly accurate “real” news story called “Linwei Ding Was a Google Software Engineer. He Was Also a Prolific Thief of Trade Secrets, Say Prosecutors.” The subtitle is a bit more spicy:
U.S. officials say some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.
The write up, which may be shaped by art history majors on a mission, states:
Court records say he had others badge him into Google buildings, making it appear as if he were coming to work. In fact, prosecutors say, he was marketing himself to Chinese companies as an expert in artificial intelligence — while stealing 500 files containing some of Google’s most important AI secrets…. His case illustrates what American officials say is an ongoing nightmare for U.S. economic and national security: Some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.
Several observations about these allegedly true statements are warranted this fine spring day in rural Kentucky:
- Some managers assume that when an employee or contractor signs a confidentiality agreement, the employee will abide by that document. The problem arises when the person shares information with a family member, a friend from school, or with a company paying for information. That assumption underscores what might be called “uninformed” or “naive” behavior.
- The language barrier and certain cultural norms lock out many people who assume idle chatter and obsequious behavior signals respect and conformity with what some might call “US business norms.” Cultural “blindness” is not uncommon.
- Individuals may possess technical expertise unknown to colleagues and contracting firms offering body shop services. For someone armed with knowledge of the photocopiers in certain US government entities, swapping out a hard drive is no big deal. A failure to appreciate an ability to draw a circuit leads to similar ineptness when discussing confidential information.
America operates in a relatively open manner. I have lived and worked in other countries, and that openness often allows information to flow. Assumptions about behavior are not based on an understanding of the cultural norms of other countries.
Net net: The vulnerability is baked in. Therefore, information is often easy to get, difficult to keep privileged, and often aided by companies and government agencies. Is there a fix? No, not without a bit more managerial rigor in the US. Money talks, moving fast and breaking things makes sense to many, and information seeps, maybe floods, from the resulting cracks. Whom does one trust? My approach: Not too many people regardless of background, what people tell me, or what I believe as an often clueless American.
Stephen E Arnold, April 9, 2024
AI and Job Wage Friction
April 1, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read again “The Jobs Being Replaced by AI – An Analysis of 5M Freelancing Jobs,” published in February 2024 by Bloomberg (the outfit interested in fiddled firmware on motherboards). The main idea in the report is that AI boosted a number of freelance jobs. What are the jobs where AI has not (as yet) added friction to the money-making process? Here’s the list of jobs NOT impeded by smart software:
- Accounting
- Backend development
- Graphics design
- Market research
- Sales
- Video editing and production
- Web design
- Web development
Other sources suggest that “Accounting” may be targeted by an AI-powered efficiency expert. I want to watch how this profession navigates the smart software in what is often a repetitive series of eye-glazing steps.
Thanks, MSFT Copilot. How are you doing with your reorganization? Running smoothly? Yeah. Smoothly.
Now to the meat of the report: What professions or jobs were the MOST affected by AI? From the cited write up, these are:
- Customer service (the exciting, long suffering discipline of chatbots)
- Social media marketing
- Translation
- Writing
The write up includes another telling chunk of data. AI has apparently had an impact on the amount of money some customers were willing to pay freelancers or gig workers. The jobs finding greater billing friction are:
- Backend development
- Market research
- Sales
- Translation
- Video editing and production
- Web development
- Writing
The article contains quite a bit of related information. Please, consult the original for a number of almost unreadable graphics and tabular data. I do want to offer several observations:
- One consequence of AI, if the data in this report are close enough for horseshoes, is that smart software drives down what customers will pay for a wide range of human-centric services. You don’t lose your job; you just get a taste of Victorian sweat shop management thinking.
- Once smart software is perceived as reasonably capable of good enough translation, the software is embraced, and demand and pay for humans drop. My view is that translation services are likely to be a harbinger of how AI will affect other jobs. AI does not have to be great; it just has to be perceived as okay. Then. Bang. Hasta la vista, human translators, except for certain specialized functions.
- Data like the information in the Bloomberg article provide a handy road map for AI developers. The jobs least affected by AI become targets for entrepreneurs who find that low-hanging fruit like translation has been picked. (Accountants, I surmise, should not relax too much.)
Net net: The wage suppression angle and the incremental adoption of AI followed by quick adoption are important ideas to consider when analyzing the economic ripples of AI.
Stephen E Arnold, April 1, 2024
AI Proofing Tools in Higher Education Limbo
March 26, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Where is the line between AI-assisted plagiarism and a mere proofreading tool? That is something universities really should have decided by now. Those that have not risk appearing hypocritical and unjust. For example, the University of North Georgia (UNG) specifically recommends students use Grammarly to help proofread their papers. And yet, as News Nation reports, a “Student Fights AI Cheating Allegations for Using Grammarly” at that school.
The trouble began when Marley Stevens’ professor ran her paper through plagiarism-detection software Turnitin, which flagged it for an AI violation. Apparently that (ironically) AI-powered tool did not know Grammarly was on the university’s “nice” list. But surely the charge of cheating was reversed once human administrators got involved, right? Nope. Writer Damita Memezes tells us:
“‘I’m on probation until February 16 of next year. And this started when he sent me the email. It was October. I didn’t think that now in March of 2024, that this would still be a big thing that was going on,’ Stevens said. Despite Grammarly being recommended on the University of North Georgia’s website, Stevens found herself embroiled in battle to clear her name. The tool, briefly removed from the school’s website, later resurfaced, adding to the confusion surrounding its acceptable usage despite the software’s utilization of generative AI. ‘I have a teacher this semester who told me in an email like “yes use Grammarly. It’s a great tool.” And they advertise it,’ Stevens said. … Despite Stevens’ appeal and subsequent GoFundMe campaign to rectify the situation, her options seem limited. The university’s stance, citing the absence of suspension or expulsion, has left her in a bureaucratic bind.”
Grammarly’s Jenny Maxwell defends the tool and emphasizes her company’s transparency around its generative components. She suggests colleges and universities update their assessment methods to address evolving tech like Grammarly. For good measure, we would add Microsoft Word’s Copilot and Google Chrome’s "help me write" feature. Shouldn’t schools be training students in the responsible use of today’s technology? According to UNG, yes. And also, no.
This means that if you use Word and its smart software, you may be a cheater. No need to wait until you go to work at a blue chip consulting firm. You are working on your basic consulting skills.
Cynthia Murrell, March 26, 2024
Can Ma Bell Boogie?
March 25, 2024
This essay is the work of a dumb dinobaby. No smart software required.
AT&T provides numerous communication and information services to the US government and companies. People see the blue and white trucks with obligatory orange cones and think nothing about their presence. Decades after Judge Green rained on the AT&T monopoly parade, the company has regained some of its market chutzpah. The old-line Bell heads knew that would happen. One reason was the simple fact that communications services have a tendency to pool; that is, online, for instance, wants to be a monopoly. Like water, online and communication services seek the lowest level. One can grouse about a leaking basement, but one is complaining about a basic fact. Complain away, but the water pools. Similarly AT&T benefits and knows how to make the best of this pooling, consolidating, and collecting reality.
I do miss the “old” AT&T. Say what you will about today’s destabilizing communications environment, just don’t forget that the pre-Judge Green world produced useful innovations, provided hardware that worked, and made it possible for some government functions to work much better than those operations perform today.
Thanks, MSFT, it seems you understand ageing companies which struggle in the midst of the cyber whippersnappers.
But what’s happened?
In February 2024, AT&T experienced an outage. The redundant, fail-safe, state-of-the-art infrastructure failed. “AT&T Cellular Service Restored after Daylong Outage; Cause Still Unknown” reported:
AT&T said late Thursday [February 22, 2024] that based on an initial review, the outage was “caused by the application and execution of an incorrect process used as we were expanding our network, not a cyber attack.” The company will continue to assess the outage.
What do we publicly know, a month later, about this remarkable event? Not much. I am not going to speculate how a single misstep can knock out AT&T, but the event raises some questions about AT&T’s procedures, its security, and, yes, its technical competence. The AT&T Ashburn data center is an interesting cluster of facilities. Could it be “knocked offline”? My concern is that the answer to this question is, “You bet your bippy it could.”
A second interesting event surfaced as well. AT&T suffered a mysterious breach which appears to have compromised data about millions of “customers.” And “AT&T Won’t Say How Its Customers’ Data Spilled Online.” Here’s a statement from the report of the breach:
When reached for comment, AT&T spokesperson Stephen Stokes told TechCrunch in a statement: “We have no indications of a compromise of our systems. We determined in 2021 that the information offered on this online forum did not appear to have come from our systems. This appears to be the same dataset that has been recycled several times on this forum.”
Leaked data are no big deal, and the incident remains unexplained. The AT&T system went down essentially in one fell swoop. Plus, there is no explanation which resonates with my understanding of the Bell “way.”
Some questions:
- What has AT&T accomplished by its lack of public transparency?
- Has the company lost its ability to manage a large, dynamic system due to cost cutting?
- Is a lack of training and perhaps capable staff undermining what I think of as “mission critical capabilities” for business and government entities?
- What are US regulatory authorities doing to address what is, in my opinion, a threat to the economy of the US and the country’s national security?
Couple the AT&T events with emerging technology like artificial intelligence: will the company make appropriate decisions, or will it create vulnerabilities typically associated with a dominant software company?
Not a positive set-up in my opinion. Ma Bell, are you too old and fat to boogie?
Stephen E Arnold, March 25, 2024