Microsoft and Its Customers: Out of Phase, Orthogonal, and Confused

May 9, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I am writing this post using something called Open LiveWriter. I switched when Microsoft updated our Windows machines and killed printing, a mouse linked via a KVM, and the 2012 version of its blog word processing software. I use a number of software products, and I keep old programs in order to compare them to modern options available to a user. The operative point is that a Windows update rendered the 2012 version of LiveWriter lost in the wonderland of Windows’ Byzantine code.


A young leader of an important project does not want to hear too much from her followers. In fact, she wishes they would shut up and get with the program. Thanks, MSFT Copilot. How’s Job One, security, coming along today?

There are reports, which I am not sure I believe, that Windows 11 is a modern version of Windows Vista. The idea is that users are switching back to Windows 10. Well, maybe. But the point is that users are not happy with Microsoft’s alleged changes to Windows; for instance:

  1. Notifications (advertising) in the Windows 11 start menu
  2. Alleged telemetry which provides a stream of user action and activity data to Microsoft for analysis (maybe marketing purposes?)
  3. Gratuitous interface changes which range from moving control items from the Control Panel to a Settings panel to fiddling with the Task Manager
  4. Wonky updates like the printer issue, driver wonkiness, and smart help which usually returns nothing helpful.

I read “This Third-Party App Blocks Integrated Windows 11 Advertising.” You can read the original article to track down this customization tool. My hunch is that its functions will be intentionally blocked by some bonus-centric Softie, or a change to the basic Windows 11 control panel will cause the software to perform like LiveWriter 2012.

I want to focus on a comment on the cited article written by seeprime:

Microsoft has seriously degraded File Explorer over the years. They should stop prolonging the Gates culture of rewarding software development, of new and shiny things, at the expense of fixing what’s not working optimally.

Now that security, not AI and not Windows 11, is the top priority at Microsoft, will the company remediate the grouses users have about the product? My answer is, “No.” Here’s why:

  1. Fixing, as seeprime suggests, is less important than coming up with something that seems “new.” The approach is dangerous because the “new” thing may be developed by someone uninformed about the hidden dependencies within code as convoluted as Google’s search plumbing. “New” just breaks the old, or the change is something that seems “new” only to an intern or to an older Softie who just does not care. Good enough is the high bar to clear.
  2. Details are not Microsoft’s core competency. Indeed, unlike Google, Microsoft has many revenue streams, and the attention goes to cooking up new big-money services like a version of Copilot which is not exposed to the Internet for its government customers. The cloud, not Windows, is the future.
  3. Microsoft, whether it knows it or not, is on the path to virtualize desktop and mobile software. The idea means that Microsoft does not have to put up with developers who make changes Microsoft does not want. Putting Windows in the cloud might give Microsoft the total control it desires.
  4. Windows is a security challenge. The thinking may be: “Let’s put Windows in the cloud and lock down security, updates, domain look-ups, etc.” I would suggest that creating one giant target might introduce some new challenges to the Softie vision.

Speculation aside, Microsoft may be at a point where users become increasingly unhappy. The mobile model, virtualization, and smart interfaces might create tasty options for users in the near future. Microsoft cannot make up its mind about AI: it has the OpenAI deal; it has the Mistral deal; it has its own internal development; and it has Inflection and probably others I don’t know about.

Now Microsoft is doing an about-face and saying, “Security is Job One.” But there’s also the need to make the Azure cloud grow. Okay, okay, which is it? The answer, I think, is, “We want to do it all. We want everything.”

This might be difficult. Users might just pile up and remain out of phase, orthogonal, and confused. Perhaps I could add angry? Just like LiveWriter: Tossed into the bit trash can.

Stephen E Arnold, May 9, 2024

Buffeting AI: A Dinobaby Is Nervous

May 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I am not sure the “go fast” folks are going to be thrilled with a dinobaby rich guy’s view of smart software. I read “Warren Buffett’s Warning about AI.” The write up included several interesting observations. The only problem is that smart software is out of the bag. Outfits like Meta are pushing the open source AI ball forward. Other outfits are pushing, but Meta has big bucks. Big bucks matter in AI Land.


Yes, dinobaby. You are on the right wavelength. Do you think anyone will listen? I don’t. Thanks, MSFT Copilot. Keep up the good work on security.

Let’s look at a handful of statements from the write up and do some observing while some in the Commonwealth of Kentucky recover from the Derby.

First, the oracle of Omaha allegedly said:

“When you think about the potential for scamming people… Scamming has always been part of the American scene. If I was interested in investing in scamming— it’s gonna be the growth industry of all time.”

Mr. Buffett has nailed the scamming angle. I particularly liked the “always.” Imagine a country built upon scamming. That makes one feel warm and fuzzy about America. Imagine how those who are hostile to US interests interpret the comment. Ill will toward the US can now be based on the premise that “scamming has always been part of the American scene.” Trust us? Just ignore the oracle of Omaha? Unlikely.

Second, the wise, frugal icon allegedly communicated that:

the technology would affect “anything that’s labor sensitive” and that for workers it could “create an enormous amount of leisure time.”

What will those individuals do with that “leisure time”? Gobbling down social media? Working on volunteer projects like picking up trash from streets and highways?

The final item I will cite is his 2018 statement:

“Cyber is uncharted territory. It’s going to get worse, not better.”

Is that a bit negative?

Stephen E Arnold, May 7, 2024

Trust the Internet? Sure and the Check Is in the Mail

May 3, 2024

This essay is the work of a dumb humanoid. No smart software involved.


When the Internet became commonplace in schools, students were taught how to use it as a research tool like encyclopedias and databases. Learning to research is better known as information literacy, and it teaches critical evaluation skills. The biggest takeaway from information literacy is to never take anything at face value, especially on the Internet. So when I read CIRA and Continuum Loop’s report, “A Trust Layer For The Internet Is Emerging: A 2023 Report,” I had my doubts.

CIRA is the Canadian Internet Registration Authority, a non-profit organization that supposedly builds a trusted Internet. CIRA acknowledges that as a whole the Internet lacks a shared framework and tool sets to make it trustworthy. The non-profit states that there are small, trusted pockets on the Internet, but they sacrifice technical interoperability for security and trust.

CIRA released a report about how people are losing faith in the Internet. According to the report’s executive summary, the number of Canadians who trust the Internet fell from 71% to 57% while the entire world went from 74% to 63%. The report also noted that companies with a high trust rate outperform their competition. Then there’s this paragraph:

“In this report, CIRA and Continuum Loop identify that pairing technical trust (e.g., encryption and signing) and human trust (e.g., governance) enables a trust layer to emerge, allowing the internet community to create trustworthy digital ecosystems and rebuild trust in the internet as a whole. Further, they explore how trust registries help build trust between humans and technology via the systems of records used to help support these digital ecosystems. We’ll also explore the concept of registry of registries (RoR) and how it creates the web of connections required to build an interoperable trust layer for the internet.”

Does anyone else hear the TLA for Whiskey Tango Foxtrot in their head? Trusted registries sound like a sales gimmick to verify web domains. There are trusted resources on the Internet, but even those need to be fact checked. The companies that have secure networks are Microsoft, TikTok, Google, Apple, and other big tech outfits, but the only thing that can be trusted about some of them is the fat bank account.
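For the curious, here is a minimal sketch of what “pairing technical trust and human trust” might look like mechanically: a cryptographic signature check (the technical part) combined with a lookup in a governed registry (the human part). The registry contents and issuer names below are hypothetical illustrations, not anything taken from the CIRA report; the sketch assumes Python’s cryptography package.

```python
# Hypothetical sketch: technical trust (Ed25519 signature) paired with
# human trust (membership in a governed registry). Data is illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# A stand-in "trust registry": issuer name -> public key on file.
issuer_key = ed25519.Ed25519PrivateKey.generate()
trust_registry = {"example-issuer": issuer_key.public_key()}

def is_trusted(issuer: str, claim: bytes, signature: bytes) -> bool:
    """True only if the issuer is in the registry AND the signature verifies."""
    public_key = trust_registry.get(issuer)
    if public_key is None:  # issuer not governed by this registry
        return False
    try:
        public_key.verify(signature, claim)  # raises on a bad signature
        return True
    except InvalidSignature:
        return False

claim = b"domain example.ca is operated by Example Co."
signature = issuer_key.sign(claim)
print(is_trusted("example-issuer", claim, signature))  # True
print(is_trusted("unknown-issuer", claim, signature))  # False
```

A registry of registries (RoR) would, presumably, apply the same check one level up: verify that the registry itself is vouched for before trusting anything it contains.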

Whitney Grace, May 3, 2024

AI: Strip Mining Life Itself

May 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I may be — like an AI system — hallucinating. I think I am seeing more philosophical essays and medieval-style ratiocination recently. A candidate example of this expository writing is “To Understand the Risks Posed by AI, Follow the Money.” After reading the write up, I did not get a sense that the focus was on following the money. Nevertheless, I circled several statements which caught my attention.

Let’s look at these, and you may want to navigate to the original essay to get each statement’s context.

First, the authors focus on what they as academic thinkers call “an extractive business model.” When I saw the term, I thought of the strip mines in Illinois. Giant draglines stripped the earth to expose coal. Once the coal was extracted, the scarred earth was bulldozed into what looked like regular prairie. It was not. Weeds grew. But to get corn or soybeans, the farmer had to spend big bucks on chemicals and some Fancy Dan equipment to coax the trashed landscape to utility. Nice.

The essay does not make the downside of extractive practices clear. I will. Take a look at a group of teens in a fast food restaurant or at a public event. The group is a consequence of the online environment in which each individual spends hours every day. I am not sure how well the chemicals and equipment used to rehabilitate the strip-mined prairie apply to humans, but I assume someone will do a study and report.


The second statement warranting a blue exclamation mark is:

Algorithms have become market gatekeepers and value allocators, and are now becoming producers and arbiters of knowledge.

From my perspective, the algorithms are expressions of human intent. The algorithms are not the gatekeepers and allocators. The algorithms express the intent, goals, and desires of the individuals who create them. The “users” knowingly or unknowingly give up certain thought methods and procedures in exchange for what appears to be something that scratches a Maslow’s Hierarchy of Needs itch. I think in terms of the medieval Great Chain of Being. The people at the top own the companies. Their instrument of control is their service. The rest of the hierarchy reflects a skewed social order. A fish understands only the environment of the fish bowl. The rest of the “world” is tough to perceive and understand. In short, the fish is trapped. Online users (addicts?) are trapped.

The third statement I marked is:

The limits we place on algorithms and AI models will be instrumental to directing economic activity and human attention towards productive ends.

Okay, who exactly is going to place limits? The farmer who leased his land to the strip mining outfit made a decision. He traded the land for money. Who is to blame? The mining outfit? The farmer? The system which allowed the transaction?

The situation at this moment is that yip yap about open source AI and the other handwaving cannot alter the fact that a handful of large US companies and a number of motivated nation states are going to spend what’s necessary to obtain control.

Net net: Houston, we have a problem. Money buys power. AI is a next generation way to get it.

Stephen E Arnold, May 2, 2024

Using AI But For Avoiding Dumb Stuff One Hopes

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay called “How I Use AI To Help With TechDirt (And, No, It’s Not Writing Articles).” The main point of the write up is that artificial intelligence or smart software (my preferred phrase) can be useful for certain use cases. The article states:

I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles. It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.


Thanks, MSFT Copilot. Bad grammar and an incorrect use of the apostrophe. Also, I was much dumber-looking in the 9th grade. But good enough, the motto of some big software outfits, right?

The idea is that an AI system can function as a partner, research assistant, editor, and interlocutor. That sounds like what Microsoft calls a “copilot.” The article continues:

I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.

The use case is that smart software functions like Miss Dalton, my English composition teacher at Woodruff High School in 1958. She was a firm believer in diagramming sentences, following the precepts of the Tressler & Christ textbook, and arcane rules such as capitalizing the first word following a colon (correctly used, of course).

I think her approach was intended to force students in 1958 to perform these word and text manipulations automatically. Then when we trooped to the library every month to do “research” on a topic she assigned, we could focus on the content, the logic, and the structural presentation of the information. If you attend one of my lectures, you can see that I am struggling to live up to her ideals.
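To make the scorecard pattern concrete, here is a minimal sketch assuming an OpenAI-style chat completions endpoint. The scorecard criteria, model name, and prompt wording are my own illustrative inventions; the TechDirt author used Lex, not this code.

```python
# Hypothetical sketch of a "scorecard" editing pass against an
# OpenAI-style chat completions API. Criteria and model are illustrative.
import os
import requests

SCORECARD = (
    "Act as a strict composition teacher. Score the draft 1-10 on clarity "
    "of thesis, logical flow, strength of evidence, and tone. Then list the "
    "three weakest passages with suggested fixes. Do not rewrite the draft."
)

def review_draft(draft: str) -> str:
    """Send the draft plus the scorecard prompt; return the critique text."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # illustrative model choice
            "messages": [
                {"role": "system", "content": SCORECARD},
                {"role": "user", "content": draft},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(review_draft("My draft paragraph about smart software goes here."))
```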

However, when I plugged in my comments about Telegram as a platform tailored to obfuscated communications, the delivery of malware and X-rated content, and the enforcement of a myth that the entity known as Mr. Durov does not cooperate with certain entities to filter content, the AI systems failed miserably. Not only did the systems lack content; one — Microsoft Copilot, to be specific — simply collapsed and produced nothing functional. Two other systems balked at the idea of CSAM delivered within a Group’s Channel devoted to paying customers of what is either illegal or extremely unpleasant content.

Several observations are warranted:

  1. For certain types of content, the systems lack sufficient data to know what the heck I am talking about
  2. For illegal activities, the systems are either pretending to be really stupid, or the developers have added STOP words to the filters to make darned sure no improper output would be presented
  3. The systems are not up-to-date; for example, Mr. Durov was interviewed by Tucker Carlson a week before Mr. Durov blocked Ukraine Telegram Groups’ content to Telegram users in Russia.

Is it, therefore, reasonable to depend on a smart software system to provide input on a “newish” topic? Is it possible the smart software systems are fiddled by the developers so that no useful information is delivered to the user (free or paying)?

Net net: I am delighted people are finding smart software useful. For my lectures to law enforcement officers and cyber investigators, smart software is, as of May 1, 2024, not ready for prime time. My concern is that some individuals may not discern the problems with the outputs. Writing about the law and its interpretation is an area about which I am not qualified to comment. But perhaps legal content is different from garden variety criminal operations. No, I won’t ask, “What’s criminal?” I would rather rely on what Miss Dalton taught in 1958. Why? I am a dinobaby and deeply skeptical of probabilistic-based systems which do not incorporate Kolmogorov-Arnold methods. Hey, that’s my relative’s approach.

Stephen E Arnold, May 1, 2024

Big Tech and Their Software: The Tent Pole Problem

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I remember a Boy Scout camping trip. I was a Wolf Scout at the time, and my “pack” had the task of setting up our tent for the night. The scout master was Mr. Johnson, and he left it to us. The weather did not cooperate; the tent pegs pulled out in the wind. The center tent pole broke. We stood in the rain. We knew the badge for camping was gone, just like a dry place to sleep. Failure. Whom could we blame? I suggested, “McKinsey & Co.” I had learned that third parties were usually fall guys. No one knew what I was talking about.


Okay, ChatGPT, good enough.

I thought about the tent pole failure, the miserable camping experience, and the need to blame McKinsey or at least an entity other than ourselves. The memory surfaced as I read “Laws of Software Evolution.” The write up sets forth some ideas which may not be firm guidelines like those articulated by the World Court, but they are about as enforceable.

Let’s look at the laws explicated in the essay.

The first law is that software exists to support a real-world task. As a result (a corollary maybe?), the software has to evolve. That is the old chestnut “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” The problem is change, which consumes money and time. As a result, original software is wrapped, peppered with calls to snappy new modules designed to fix up or extend the original software.

The second law is that when changes are made, the software construct becomes more complex. Complexity is what humans do. A true master makes certain processes simple. Software has artists, poets, and engineers with vision. Simple may not be a key component of the world the programmer wants to create. Thus, increasing complexity creates surprises like unknown dependencies, sluggish performance, and a giant black hole of costs.
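Here is a hypothetical sketch of how those unknown dependencies accrete: the original function is never touched, each “fix” wraps it, and each wrapper quietly couples itself to something new. All the names and numbers are invented for illustration.

```python
# Invented example: three generations of wrappers around untouchable code.

def compute_price(units: int) -> float:
    """The original 2012-era function. Nobody dares edit it."""
    return units * 9.99

def compute_price_v2(units: int, region: str) -> float:
    """Wrapper added for regional tax; new hidden coupling to a tax table."""
    tax = {"US": 1.07, "CA": 1.12}
    return compute_price(units) * tax.get(region, 1.0)

def compute_price_v3(units: int, region: str, promo: str = "") -> float:
    """Wrapper added for promotions; now coupled to v2 AND a promo list."""
    active_promos = {"SPRING10": 0.90}
    return compute_price_v2(units, region) * active_promos.get(promo, 1.0)

# Three layers deep, a change to the tax table silently alters promo math,
# and removing any layer breaks the ones above it: an unknown dependency.
print(round(compute_price_v3(3, "US", "SPRING10"), 2))
```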

The third law is not explicitly called out like Laws One and Two. Here’s my interpretation of the “lurking law,” as I have termed it:

Code can be shaped and built upon.

My reaction to this essay is positive, but the link to evolution eludes me. The one issue I want to raise is that once software is built, deployed, and fiddled with, it is like a river pier built by Roman engineers. Moving the pier or fixing it so it will persist is a very, very difficult task. At some point, even the Roman concrete will weather away. The bridge or structure will fall down. Gravity wins. I am okay with software devolution.

The future, therefore, will be stuffed with software breakdowns. The essay makes a logical statement:

… we should embrace the malleability of code and avoid redesign processes at all costs!

Sorry. Won’t happen. Woulda, shoulda, and coulda cannot do the job.

Stephen E Arnold, May 1, 2024

One Half of the Sundar & Prabhakar Act Gets Egged: Garrf.

April 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

After I wrote Google Version 2: The Calculating Predator, Bear Stearns bought the rights to portions of my research and published one of its analyst reports. In that report, a point was made about Google’s research into semantic search. Remember, this was in 2005, long before the AI balloon inflated to the size of Taylor Swift’s piggy bank. My client (whom I am not allowed to name) and I were in the Manhattan Bear Stearns office. We received a call from Prabhakar Raghavan, who was the senior technology something at Yahoo at that time. I knew of Dr. Raghavan because he had been part of the Verity search outfit. On that call, Dr. Raghavan was annoyed that Bear Stearns suggested Yahoo was behind the eight ball in Web search. We listened, and I pointed out that Yahoo was not matching Google’s patent filing numbers. Although not an indicator of innovation, it is one indicator. The Yahoo race car had sputtered and had lost the search race. I recall one statement Dr. Raghavan uttered: “I can do a better search engine for $300,000.” Well, I am still waiting. Dr. Raghavan may have an opportunity to find his future elsewhere if he continues to get the type of improvised biographical explosive device shoved under his office door at Google. I want to point out that I thought Dr. Raghavan’s estimate of the cost of search was a hoot. How could he beat that for a joke worthy of Jack Benny?


A big dumb bunny gets egged. Thanks, MSFT Copilot. Good enough.

I am referring to “The Man Who Killed Google Search,” written by Edward Zitron. For those to whom Mr. Zitron is not a household name like Febreze air freshener, he is “the CEO of national Media Relations and Public Relations company EZPR, of which I am both the E (Ed) and the Z (Zitron). I host the Better Offline Podcast, coming to iHeartRadio and everywhere else you find your podcasts February 2024.” For more about Mr. Zitron, navigate to this link. (Yep, it takes quite a while to load, but be patient.)

The main point of the write up is that the McKinsey-experienced Sundar Pichai (the other half of the comedy act) hired the article-writing, Verity-seasoned Dr. Raghavan to help steer the finely-crafted corporate aircraft carrier USS Google into the Sea of Money. Even though the duo are not very good at comedy, they are doing a bang-up job of making the creaking online advertising machine output big money. If you don’t know how big, just check out the earnings for the most recent financial quarter at this link. If you don’t want to wade through Silicon Valley jargon, Google is “a two trillion dollar company.” How do you like that, Mr. and Mrs. Traditional Advertising?

The write up is filled with proper names of Googlers past and present. The point is that the comedy duo dumped some individuals who embraced the ethos of the old, engineering-oriented, relevant search results Google. The vacancies were filled with those who could shove more advertising into what once were clean, reasonably well-lighted places. At the same time, carpetland (my term for the executive corridor down which Messrs. Brin and Page once steered their Segways) elevated above the wonky world of the engineers, the programmers, the Ivory Tower thinker types, and outright wonkiness of the advanced research units. (Yes, there were many at one time.)

Using the thought processes of McKinsey (the opioid idea folks) and the elocutionary skills of Dr. Raghavan, Google search degraded while the money continued to flow. The story presented by Mr. Zitron is interesting. I will leave it to you to internalize it and thank your lucky stars you are not given the biographical improvised explosive device as a seat cushion. Yowzah.

Several observations:

  1. I am not sure the Sundar & Prabhakar duo wrote the script for the Death of Google Search. Believe me, there were other folks in Google carpetland aiding the process. How about a baby maker in the legal department as an example of ground principles? What about an attempted suicide by a senior senior senior manager’s squeeze? What about a big time thinker’s untimely demise as a result of narcotics administered by a rental female?
  2. The problems at Google are a result of decades of high school science club members acting out their visions of themselves as masters of the universe and a desire to rig the game so money flowed. Cleverness, cute tricks, and owning the casino and the hotel and the parking lot were part of Google’s version of Hotel California. The business set up was money in, fancy dancing in public, and nerdland inside. Management? Hey, math is hard. Managing is zippo.
  3. The competitive arena was not set up for a disruptor like the Google. I do not want to catalog what the company did to capture what appears to be a very good market position in online advertising. After a quarter century, the idea that Google might be an alleged monopoly is getting some attention. But alleged is one thing; change is another.
  4. The innovator’s dilemma has arrived in the lair of Googzilla. After Google invented transformers, OpenAI made something snazzy with them and cut a deal with Microsoft. The result was the AI hype moment, with Google viewed as a loser. Forget the money. Google is not able to respond, some said. Perception is important. The PR gaffe in Paris where Dr. Prabhakar showed off Bard outputting incorrect information; the protests and arrests of staff; and the laundry list of allegations about the company’s business practices in the EU are compounding the one really big problem — Google’s ability to control its costs. Imagine. A corporate grunt sport could be the hidden disease. Is Googzilla clear headed or addled? Time will tell, I believe.

Net net: The man who killed Google is just a clueless accomplice, not the wizard with the death ray cooking the goose and its eggs. Ultimately, in my opinion, we have to blame the people who use Google products and services, rely on Google advertising, and trust search results. Okay, Dr. Raghavan, suspended sentence. Now you can go build your $300,000 Web search engine. I will be available to evaluate it as I did Search2, Neeva, and the other attempts to build a better Google. Can you do it? Sure, you will be a Xoogler. Xooglers can do anything. Just look at Mr. Brin’s airship. And that egg will wash off, unlike that crazy idea to charge Verity customers for each entry in an index passed for each user’s query. And that’s a joke funnier than the Paris bollocksing of smart software. Taxi meter pricing for an in-house, enterprise search system. That is truly hilarious.

Stephen E Arnold, April 30, 2024

The Google Explains the Future of the Google Cloud: Very Googley, Of Course

April 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

At its recent Next ’24 conference, Google Cloud and associates shared their visions for the immediate future of AI. Through the event’s obscurely named Session Library, one can watch hundreds of sessions and access resources connected to many more. The idea — if you have not caught on to the Googley nomenclature — is to make available videos of the talks at the conference. To narrow the list, one can filter by session category, conference track, learning level, solution, industry, topic of interest, and whether video is available. Keep in mind that the words you (a normal human, I presume) may use to communicate your interest may not be the lingo Googzilla speaks. AI and Machine Learning feature prominently. Other key areas include data and databases, security, development and architecture, productivity, and revenue growth (naturally). There is even a considerable nod to diversity, equity, and inclusion (DEI). Okay, nod, nod.

Here are a few session titles from just the “AI and ML” track to illustrate the scope of this event and the available information:

  • A cybersecurity expert’s guide to securing AI products with Google SAIF
  • AI for banking: Streamline core banking services and personalize customer experiences
  • AI for manufacturing: Enhance productivity and build innovative new business models
  • AI for telecommunications: Transform customer interactions and network operations
  • AI in capital markets: The biggest bets in the industry
  • Accelerate software delivery with Gemini and Code Transformations
  • Revolutionizing healthcare with AI
  • Streamlining access to youth mental health services

It looks like there is something for everybody. We think the titles make reasonably clear the scope and bigness of Google’s aspirations. Nor would we expect less from a $2 trillion outfit based on advertising, would we? Run a query for Code Red (or, in Google lingo, CodeRED), and you will be surprised that the state-of-emergency, Microsoft-is-a-PR-king mentality persists. (Is this the McKinsey way?) Well, not for those employed at McKinsey. Former McKinsey professionals have more latitude in their management methods; for example, emulating high school science club planning techniques. There are no sessions we could spot about Google’s competition. If one is big enough, there is no competition. One of Googzilla’s relatives made a mess of Tokyo real estate largely without lasting consequences.

Cynthia Murrell, April 30, 2024

Right, Professor. No One Is Using AI

April 29, 2024

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Artificial intelligence and algorithms aren’t new buzzwords, but they are the favorite technology jargon being tossed around BI and IT water coolers. (Or would it be Zoom conferences these days?) AI has been a part of modern life for years, but AI engines are finally “smart enough” to do actual jobs — sort of. There are still big problems with AI, but one expert shares his take on why the technology isn’t being adopted more in the UiPath article: “3 Common Barriers to AI Adoption and How to Overcome Them.”

Whenever new technology hits the market, experts write lists about why more companies aren’t implementing it. The first “mistake” is a lack of knowledge about how to adopt AI: companies don’t know about all the work processes within their own walls. The way to overcome this issue is to take an inventory of the processes, which can be done via data mining. That’s not so simple if a company doesn’t have the software or know-how.

The second “mistake” is lack of expertise about the subject. The cure for this is classes and “active learning.” Isn’t that another term for continuing education? The third “mistake” is lack of trust and the risks surrounding AI. Those exist because the technology is new and needs to be tested more before it’s deployed on a mass scale. Smaller companies don’t want to be guinea pigs, so they wait until the technology becomes SOP.

AI is another tool that will become as ubiquitous as mobile phones, but the expert is correct about this:

“These barriers are significant, but they pale in comparison to the risk of delaying AI adoption. Early adopters are finding new AI use cases and expanding their lead on the competition every day.

There’s lots to do to prepare your organization for this new era, but there’s also plenty of value and advantages waiting for you along your AI adoption journey. Automation can do a lot to help you move forward quickly to capture AI’s value across your organization.”

If your company finds an AI solution that works, then that’s wonderful. Automation is part of advancing technology, but AI isn’t ready to be deployed by all companies. If something works for a business and it’s not too archaic, then don’t fix what ain’t broke.

But students have figured out how to use AI to deal with certain professors. No, I am not mentioning any names.

Whitney Grace, April 29, 2024

Not Only Those Chasing Tenure Hallucinate, But Some Citations Are Wonky Too

April 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “ChatGPT Hallucinates Fake But Plausible Scientific Citations at a Staggering Rate, Study Finds.” Wow. “Staggering.” The write up asserts:

A recent study has found that scientific citations generated by ChatGPT often do not correspond to real academic work

In addition to creating non-reproducible research projects, now those “inventing the future” and “training tomorrow’s research leaders” appear to find smart software helpful in cooking up “proof” and “evidence” to help substantiate “original” research. Note: The quotes are for emphasis and added by the Beyond Search editor.


Good enough, ChatGPT. Is the researcher from Harvard health?

Research conducted by a Canadian outfit sparked this statement in the article:

…these fabricated citations feature elements such as legitimate researchers’ names and properly formatted digital object identifiers (DOIs), which could easily mislead both students and researchers.

The student who did the research told PsyPost:

“Hallucinated citations are easy to spot because they often contain real authors, journals, proper issue/volume numbers that match up with the date of publication, and DOIs that appear legitimate. However, when you examine hallucinated citations more closely, you will find that they are referring to work that does not exist.”

The researcher added:

“The degree of hallucination surprised me,” MacDonald told PsyPost. “Almost every single citation had hallucinated elements or were just entirely fake, but ChatGPT would offer summaries of this fake research that was convincing and well worded.”

My thought is that more work is needed to determine the frequency with which made-up AI citations appear in papers destined for peer review or for personal aggrandizement on services like ArXiv.
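One obvious countermeasure is mechanical: check whether a cited DOI actually resolves. Here is a minimal sketch using the public Crossref REST API; the sample DOIs are illustrative, and a fuller check would also compare the returned title and authors against the citation, since a hallucinated reference can borrow a real DOI.

```python
# Sketch: sanity-check a citation's DOI against the Crossref REST API.
# A fabricated DOI that merely "appears legitimate" typically returns 404.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    return response.status_code == 200

print(doi_exists("10.1038/nature14539"))     # a real Nature paper: True
print(doi_exists("10.1234/not.a.real.doi"))  # likely fabricated: False
```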

These hallucination findings, coupled with the excitement of a president departing Stanford University and the hoo-hah at Harvard related to “ethics,” raise questions about the moral compass used by universities to guide their educational battleships. Now we learn that the professors are using AI and including made-up or fake data in their work?

What’s the conclusion?

[a] On the beam and making ethical behavior part of the woodwork

[b] Supporting and rewarding crappy work

[c] Ignoring the reality that the institutions have degraded over time

[d] Scrolling TikTok looking for grant tips.

If you don’t know, ask You.com or a similar free smart service.

Stephen E Arnold, April 26, 2024

