Academic Excellence: Easy to Say, Tough to Deliver, It Seems

February 21, 2024

This essay is the work of a dumb dinobaby. No smart software required.

A recent report from Columbia Journalism Review examines “Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena.” Many words from admirals watching the Titanic steam toward the iceberg. The executive summary explains:

“Insufficient attention has also been paid to the implications of the news industry’s dependence on technology companies for AI. Drawing on 134 interviews with news workers at 35 news organizations in the United States, the United Kingdom, and Germany — including outlets such as The Guardian, Bayerischer Rundfunk, the Washington Post, The Sun, and the Financial Times — and 36 international experts from industry, academia, technology, and policy, this report examines the use of AI across editorial, commercial, and technological domains with an eye to the structural implications of AI in news organizations for the public arena. In a second step, it considers how a retooling of the news through AI stands to reinforce news organizations’ existing dependency on the technology sector and the implications of this.”

The first chapter examines how AI is changing news production and distribution. It is divided into three parts: news organizations’ motives for using AI, how they are doing so, and what expectations they have for the technology. Chapter two examines why news organizations now rely on tech companies and what this could mean for the future of news. Here’s a guess: criticism of big tech firms may soon fail to see the light of day.

See the report (or download the PDF) for all the details. After analyzing the data, author Felix M. Simon hesitates to draw any firm conclusions about the future of AI and news organizations—there are too many factors in flux. For now, the technology is mostly being used to refine existing news practices rather than to transform them altogether. But that could soon change. If it does, public discourse as a whole will shift, too. Simon notes:

“As news organizations get reshaped by AI, so too will the public arena that is so vital to democracy and for which news organizations play a gatekeeper role. Depending on how it is used, AI has the potential to structurally strengthen news organizations’ position as gatekeepers to an information environment that provides ‘people with relatively accurate, accessible, diverse, relevant, and timely independently produced information about public affairs’ which they can use to make decisions about their lives. … This, however, is not a foregone conclusion. Instead, it will depend on decisions made by the set of actors who wield control over the conditions of news work — executives, managers, and journalists, but also increasingly technology companies, regulatory bodies, and the public.”

That is a lot of players. Which ones hold the most power in this equation? Hint: it is not the last entry in the list.

Cynthia Murrell, February 21, 2024

Map Data: USGS Historical Topos

February 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The ESRI blog published “Access Over 181,000 USGS Historical Topographic Maps.” The map outfit teamed with the US Geological Survey to provide access to an additional 1,745 maps, bringing the collection to 181,008 maps in total.

The blog reports:

Esri’s USGS historical topographic map collection contains historical quads (excluding orthophoto quads) dating from 1884 to 2006 with scales ranging from 1:10,000 to 1:250,000. The scanned maps can be used in ArcGIS Pro, ArcGIS Online, and ArcGIS Enterprise. They can also be downloaded as georeferenced TIFs for use in other applications.

These data are useful. Maps can be viewed with ESRI’s online service, the Historical Topo Map Explorer.

If you are not familiar with historical topos, ESRI states in an ARCGIS post:

The USGS topographic maps were designed to serve as base maps for geologists by defining streams, water bodies, mountains, hills, and valleys. Using contours and other precise symbolization, these maps were drawn accurately, made mathematically correct, and edited carefully. The topographic quadrangles gradually evolved to show the changing landscape of a new nation by adding symbolization for important highways; canals; railroads; and railway stations; wagon roads; and the sites of cities, towns and villages. New and revised quadrangles helped geologists map the mineral fields, and assisted populated places to develop safe and plentiful water supplies and lay out new highways. Primary considerations of the USGS were the permanence of features; map symbolization and legibility; and the overall cost of compiling, editing, printing and distributing the maps to government agencies, industry, and the general public. Due to the longevity and the numerous editions of these maps they now serve new audiences such as historians, genealogists, archeologists, and people who are interested in the historical landscape of the U.S.

This public facing data service is one example of how extremely useful information gathered by US government entities can be made more accessible via a public-private relationship. When I served on the board of the US National Technical Information Service, I learned that other useful information is available, just not easily accessible to US citizens.

Good work, ESRI and USGS! Now what about making that volcano data a bit easier to find and access in real time?

Stephen E Arnold, February 20, 2024

An Allocation Society or a Knowledge Value System? Pick One, Please!

February 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I get random inquiries, usually from LinkedIn, asking me about books I would recommend to a younger person trying to [a] create a brand and make oodles of money, [b] generate sales immediately from unsolicited emails to strangers, or [c] make a somewhat limp-wristed attempt to sell me something. I typically recommend a book I learned about when I was giving lectures at the Kansai Institute of Technology and a couple of outfits in Tokyo. The book is The Knowledge Value Revolution, written by a former Japanese government professional named Taichi Sakaiya. The subtitle of the book is “A History of the Future.”

So what?

I read an essay titled “The Knowledge Economy Is Over. Welcome to the Allocation Economy.” The thesis of this essay is that Sakaiya’s description of the future is pretty much wacko. Here’s a passage from the essay about the allocation economy:

Summarizing used to be a skill I needed to have, and a valuable one at that. But before it had been mostly invisible, bundled into an amorphous set of tasks that I’d called “intelligence”—things that only I and other humans could do. But now that I can use ChatGPT for summarizing, I’ve carved that task out of my skill set and handed it over to AI. Now, my intelligence has learned to be the thing that directs or edits summarizing, rather than doing the summarizing myself.

A world class knowledge surfer now wins gold medals for his ability to surf on the output of smart robots and pervasive machines. Thanks, Google ImageFX. Not funny but good enough, which is the mark of a champion today, isn’t it?

For me, the message is that people want summaries. This individual was a summarizer and, hence, a knowledge worker. With the smart software doing the summarizing, the knowledge worker is kaput. The solution is for the knowledge worker to move up conceptually. The jump is a meta-play. Debaters learn quickly that when an argument is going nowhere, the trick that can deliver a win is to pop up a level. The shift from poverty to a discussion of the dysfunction of a city board of advisors is a trick used in places like San Francisco. It does not matter that the mess is not a city government issue. Tents and bench dwellers are the exhaust from a series of larger systems. No one can do much about the problem. Therefore, nothing gets done. But for a novice debater unfamiliar with popping up a level or a meta-play, the loss is baffling.

The essay putting Sakaiya in the dumpster is not convincing, and it certainly is not going to win a debate between the knowledge value revolution and the allocation economy. The reason strikes me as a failure to see that smart software, the present and future dislocations of knowledge workers, and the brave words about becoming a director or editor are evidence that Sakaiya was correct. He wrote in 1985:

If the type of organization typical of industrial society could be said to resemble a symphony orchestra, the organizations typical of the knowledge-value society would be more like the line-up of a jazz band.

The author of the allocation economy does not realize that individuals with expertise are playing a piano or a guitar. Of those who do play, only a tiny fraction (one percent of the top 10 percent, perhaps?) will be able to support themselves. Of those elite individuals, how many Taylor Swifts are making the record companies and motion picture impresarios look really stupid? Two, five, whatever. The point is that the knowledge-value revolution transforms much more than “attention” or “allocation.” Sakaiya, in my opinion, is operating at a sophisticated meta-level. Renaming the plight of people who do menial mental labor does not change a painful fact: Knowledge value means those who have high-value knowledge are going to earn a living. I am not sure what the newly unemployed technology workers, the administrative facilitators, or the cut-loose “real” journalists are going to do to live as their parents did in the good old days.

The allocation essay offers:

AI is cheap enough that tomorrow, everyone will have the chance to be a manager—and that will significantly increase the creative potential of every human being. It will be on our society as a whole to make sure that, with the incredible new tools at our disposal, we bring the rest of the economy along for the ride.

How many jazz musicians can ride on a particular market sector propelled by smart software? How many individuals will enjoy personal and financial success in the AI allocation-centric world? Remember, please, there are about eight billion people in the world. How many Duke Ellingtons and Dave Brubecks were there?

The knowledge value revolution means that the majority of individuals will be excluded from nine-to-five jobs, significant financial success, and meaningful impact on social institutions. I am not saying everyone will become a surfer on smart software, but if that happens, the future is going to be more like the one Sakaiya outlined, not an allocation-centric operation, in my opinion.

Stephen E Arnold, February 20, 2024

Search Is Bad. This Is News?

February 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Everyone is a search expert. More and more “experts” are criticizing “search results.” What is interesting is that the number of gripes continues to go up. At the same time, the number of Web search options is creeping higher as well. My hunch is that really smart venture capitalists “know” there is money to be made. There was one Google; therefore, another one is lurking under a pile of beer cans in a dorm somewhere.

“One Tech Tip: Ready to Go Beyond Google? Here’s How to Use New Generative AI Search Sites” is a “real” news report which explains how to surf on the new ChatGPT-type smart systems. At the same time, the article makes it clear that the Google may have lost its baseball bat on the way to the big game. The irony is that Google has lots of bats and probably owns the baseball stadium, the beer concession, and the teams. Google also owns the information observatory near the sports arena.

The write up reports:

A recent study by German researchers suggests the quality of results from Google, Bing and DuckDuckGo is indeed declining. Google says its results are of significantly better quality than its rivals, citing measurements by third parties.

A classic he said, she said argument. Objective and balanced. But the point is that Google search is getting worse and worse. Bing does not matter because its percentage of the Web search market is low. DuckDuckGo is a metasearch system like Startpage. I don’t count these as primary search tools; they are, for the most part, utilities for searching other people’s indexes.

What’s new with the ChatGPT-type systems? Here’s the answer:

Rather than typing in a string of keywords, AI queries should be conversational – for example, “Is Taylor Swift the most successful female musician?” or “Where are some good places to travel in Europe this summer?” Perplexity advises using “everyday, natural language.” Phind says it’s best to ask “full and detailed questions” that start with, say, “what is” or “how to.” If you’re not satisfied with an answer, some sites let you ask follow up questions to zero in on the information needed. Some give suggested or related questions. Microsoft‘s Copilot lets you choose three different chat styles: creative, balanced or precise.

Ah, NLP, or natural language processing, is the key, not typing keywords. I want to add that “not typing” means avoiding, when possible, Boolean operators, which return results in which the query strings actually occur. Who wants that? Stupid, right?
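As a toy illustration of the difference (the documents and queries here are hypothetical, not any real engine’s behavior), classic Boolean retrieval returns only the documents in which every query string literally occurs:

```python
# Toy Boolean AND retrieval: a document matches only if every query term
# appears as a literal (case-insensitive) substring of its text.
docs = {
    "d1": "Google search results and advertising",
    "d2": "Taylor Swift tour dates in Europe",
    "d3": "European travel tips for the summer",
}

def boolean_and(terms, corpus):
    """Return ids of documents containing every term (case-insensitive)."""
    terms = [t.lower() for t in terms]
    return [doc_id for doc_id, text in corpus.items()
            if all(t in text.lower() for t in terms)]

print(boolean_and(["europe", "travel"], docs))  # ['d3']
print(boolean_and(["search"], docs))            # ['d1']
```

A ChatGPT-type system, by contrast, takes the whole conversational question as free text and is not limited to documents where the exact strings occur.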

There is a downside; for instance:

Some AI chatbots disclose the models that their algorithms have been trained on. Others provide few or no details. The best advice is to try more than one and compare the results, and always double-check sources.

What does this have to do with Google? Let me highlight several points which make clear how Google remains lost in the retrieval wilderness, leading the following boy scout and girl scout troops into the fog of unknowing:

  1. Google has never revealed what it indexes or when it indexes content. What’s in the “index” and sitting on Google’s servers is unknown except to some working at Google. In fact, the vast majority of Googlers know little about search. The focus is advertising, not information retrieval excellence.
  2. Since it was inspired by GoTo, Overture, and Yahoo to get into advertising, Google has been on a long, continuous march to monetize that which can be shaped to produce clicks. How far from helpful is Google’s system? Wait until you see AI helping you find a pizza near you.
  3. Google’s bureaucratic method is what I would call many small rubber boats generally trying to figure out how to get to Advertising Land while caught in a long, difficult storm. The little boats are tough to keep together. How many AI projects are enough? There are never enough.

Net net: The understanding of Web search has been distorted by Google’s observatory. One is looking at information in a Google facility, designed by Googlers, and maintained by Googlers who were not around when the observatory and associated plumbing were constructed. As a result, discussion of search in the context of smart software is distorted.

ChatGPT-type services provide a different entry point to information retrieval. The user still has to figure out what’s right and what’s wonky. No one wants to do that work. Write ups about “new” systems are little more than explanations of why most people will not be able to think about search differently. That observatory is big; it is familiar; and it is owned by Google just like the baseball team, the concessions, and the stadium.

Search means Google. Writing about search means Google. That’s not helpful or maybe it is. I don’t know.

Stephen E Arnold, February 20, 2024

The US Government Needs Its McKinsey Fix

February 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Governments don’t know how to spend their money wisely. Despite all its grandness, the United States has a deficit spending problem. According to Promarket, the US spends way too many tax dollars on McKinsey and Company: “Why The US Government Buys Overpriced Services From McKinsey.” McKinsey and Company is a consulting firm that provides organizations and the US government with advice on how to improve operations.

Hiring McKinsey is comparable to the IRS conducting a tax audit on the US government. The company is supposed to help the US fold social justice, diversity, and other political jargon into its business practices. The Clinton administration first purchased McKinsey’s overzealous services. Unfortunately, McKinsey doesn’t do much other than repackage mediocre advice with an expensive price tag. How much does McKinsey charge for services? It’s a lot:

“Such practices used to be called “honest graft.” And let’s be clear, McKinsey’s services are very expensive. Back in August, I noted that McKinsey’s competitor, the Boston Consulting Group, charges the government $33,063.75/week for the time of a recent college grad to work as a contractor. Not to be outdone, McKinsey’s pricing is much much higher, with one McKinsey “business analyst”—someone with an undergraduate degree and no experience—lent to the government priced out at $56,707/week, or $2,948,764/year.”

McKinsey can charge outrageous prices because the company uses unethical tactics, and it can keep doing so because the General Services Administration gets a 0.75% cut of what contractors bill, officially called the “Industrial Funding Fee,” or IFF. The GSA receives a larger operating budget whenever it outsources to contractors.
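The annualized figure in the quoted passage, and the size of the GSA’s slice, can be checked with simple arithmetic. This sketch uses only the numbers reported in the article; the IFF line is my own illustration of a 0.75% fee applied to one analyst-year of billing, not an official GSA computation:

```python
# Back-of-the-envelope check of the figures quoted in the article.
BCG_WEEKLY = 33_063.75   # Boston Consulting Group rate for a recent college grad
MCK_WEEKLY = 56_707.00   # McKinsey "business analyst" weekly rate
IFF_RATE = 0.0075        # GSA Industrial Funding Fee: 0.75% of contractor billings

mck_annual = MCK_WEEKLY * 52   # 52 billed weeks per year
print(f"McKinsey analyst, annualized: ${mck_annual:,.0f}")   # $2,948,764
print(f"BCG grad, annualized: ${BCG_WEEKLY * 52:,.0f}")      # $1,719,315
print(f"Illustrative IFF cut on one analyst-year: ${mck_annual * IFF_RATE:,.2f}")
```

The weekly-to-annual math matches the article’s $2,948,764 exactly, so the quoted figure simply assumes the analyst is billed every week of the year.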

Will changes be made for the next fiscal year? Unlikely.

Whitney Grace, February 20, 2024

Googzilla Takes Another OpenAI Sucker Punch

February 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In January 2023, the savvy Googlers woke up to news that Microsoft and OpenAI had seized the initiative in smart software. One can argue the technical merits, but from a PR and marketing angle, the Softies and Sam AI-Man crept upon the World Economic Forum and clubbed the self-confident Googzilla in the cervical spine. The Google did not see that coming.

The somewhat quirky OpenAI has done it again. This time the blow was delivered with a kin geri or, more colloquially, a groin kick. How did Sam AI-Man execute this painful strike? Easy. The company released Sora, a text-to-video smart software function. “OpenAI’s Sora Generates Photorealistic Videos” reports:

Sora is a generative AI diffusion model. Sora can generate multiple characters, complex backgrounds and realistic-looking movements in videos up to a minute long. It can create multiple shots within one video, keeping the characters and visual style consistent, allowing Sora to be an effective storytelling tool.

Chatter indicates that OpenAI is not releasing a mere demonstration or carefully crafted fake examples. Nope, unlike a certain large outfit with a very big bundle of cash, the OpenAI experts have skipped the demonstrations and gone directly to a release of the service to individuals who will probe the system for safety and good manners.

Could Googzilla be the company which OpenAI intends to drop to its knees? From my vantage point, heck yes. The outputs from the system are not absolutely Hollywood grade, but the examples are interesting and suggest that the Google, when it gets up off the floor, will have to do more.

Several observations:

  1. OpenAI is doing a good job with its marketing and PR. Google announces quantum supremacy; OpenAI provides a glimpse of a text-to-video function which will make game developers, Madison Avenue art history majors, and TikTok creators pay attention.
  2. Google is once again in react mode. I am not sure pumping up the number of tokens in Bard or Gemini or whatever is going to be enough to scrub Sora and prevent the spread of this digital infection.
  3. Googzilla may be like the poor 1950s movie monster who was tamed not by a single blow but by many pesky attacks. I think this approach is called “death by a thousand cuts.”

Net net: OpenAI has pulled off a marketing coup for a second time. Googzilla is ageing, and old often means slow. What is OpenAI’s next marketing play? A Bruce Lee “I am faster than you, big guy” move or a Ninja stealth move? Both methods seem to have broken through the GOOG’s defenses.

Stephen E Arnold, February 19, 2024

Generative AI and College Application Essays: College Presidents Cheat Too

February 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The first college application season since ChatGPT hit it big is in full swing. How are admissions departments coping with essays that may or may not have been written with AI? It depends on which college one asks. Forbes describes various policies in “Did You Use ChatGPT on your School Applications? These Words May Tip Off Admissions.” The outlet asked more than 20 public and private schools about the issue. Many dared not reveal their practices: as a spokesperson for Emory put it, “it’s too soon for our admissions folks to offer any clear observations.” But the academic calendar will not wait for clarity, so schools must navigate these murky waters as best they can.

Reporters Rashi Shrivastava and Alexandra S. Levine describe the responses they did receive. From “zero tolerance” policies to a little wiggle room, approaches vary widely. Though most refused to reveal whether they use AI detection software, a few specified they do not. A wise choice at this early stage. See the article for details from school to school.

Shrivastava and Levine share a few words considered most suspicious: Tapestry. Beacon. Comprehensive curriculum. Esteemed faculty. Vibrant academic community. Gee, I think I used one or two of those in my college essays, and I wrote them before the World Wide Web even existed. On a typewriter. (Yes, I am ancient.) Will earnest, if unoriginal, students who never touched AI get caught up in the dragnets? At least one admissions official seems confident he can tell the difference. We learn:

“Ben Toll, the dean of undergraduate admissions at George Washington University, explained just how easy it is for admissions officers to sniff out AI-written applications. ‘When you’ve read thousands of essays over the years, AI-influenced essays stick out,’ Toll told Forbes. ‘They may not raise flags to the casual reader, but from the standpoint of an admissions application review, they are often ineffective and a missed opportunity by the student.’ In fact, GWU’s admissions staff trained this year on sample essays that included one penned with the assistance of ChatGPT, Toll said—and it took less than a minute for a committee member to spot it. The words were ‘thin, hollow, and flat,’ he said. ‘While the essay filled the page and responded to the prompt, it didn’t give the admissions team any information to help move the application towards an admit decision.’”

That may be the key point here—even if an admissions worker fails to catch an AI-generated essay, they may reject it for being just plain bad. Students would be wise to write their own essays rather than leave their fates in algorithmic hands. As Toll put it:

“By the time a student is filling out their application, most of the materials will have already been solidified. The applicants can’t change their grades. They can’t go back in time and change the activities they’ve been involved in. But the essay is the one place they remain in control until the minute they press submit on the application. I want students to understand how much we value getting to know them through their writing and how tools like generative AI end up stripping their voice from their admission application.”

Disqualified or underwhelming—either way, relying on AI to write one’s application essay could spell rejection. Best to buckle down and write it the old-fashioned way. (But one can skip the typewriter.)

Cynthia Murrell, February 19, 2024

Relevance: Rest in Peace

February 16, 2024

This essay is the work of a dumb dinobaby. No smart software required.

It is Friday, and I am tired of looking at computer-generated news with lines like “Insert paragraphs here.” No, don’t bother. The issues I am experiencing with SmartNews and Flipboard are more than annoyances. These, like other aggregation services, are becoming less productive than reading random Reddit posts or the information posted on Blackmagic forum boards. Everyone is trying to find a way to make a buck before the bank account says, “Yo, transaction denied.”

Marketers will find that buying traffic enables many opportunities. Thanks MSFT Copilot whatever. Good enough.

I read “Meta Is Passing on the Apple Tax for Boosted Posts to Advertisers.” What’s the big point in the pontificating online service? How about this passage:

Meta says those who want to boost posts through its iOS apps will now need to add prepaid funds and pay for them before their boosted posts are published. Meta will charge an extra 30 percent to cover Apple’s transaction fee for preloading funds in iOS as well.

My interpretation is: If you want traffic, you will pay for it. And you will pay other fees as well. And if you don’t like it, give those free press release services a whirl.
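The fee math is worth a quick sketch. Assuming the surcharge works as the quoted passage describes, a flat 30 percent added on top of the boost amount (the dollar figures here are hypothetical, for illustration only):

```python
# Illustrative arithmetic for a boosted post bought through iOS (hypothetical amounts).
boost = 100.00                      # what the advertiser wants to spend on the boost
surcharge = boost * 0.30            # Meta's extra 30 percent
total_paid = boost + surcharge      # $130.00 leaves the advertiser's account

# If Apple takes 30 percent of the whole payment, Meta nets less than the boost:
apple_cut = total_paid * 0.30       # $39.00
meta_nets = total_paid - apple_cut  # $91.00

# Netting the full $100 after a 30 percent platform cut would need a bigger gross-up:
gross_up = boost / (1 - 0.30)       # about $142.86
```

Either way the toll works out, it is the advertiser, not Meta, who absorbs it.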

So what?

  1. The pay-for-traffic model is now the best and fastest way to get traffic or clicks. Free rides, I think, have been replaced with tolls.
  2. Next up will be subscriptions for those who want traffic. Just pay a lump sum and you will get traffic. The traffic may be worthless, but for those who like to play roulette, you may get a winner. Remember, the house owns zero and double zero plus whatever you lose. Great deal, right?
  3. The popular click is likely to be shaped, weaponized, or disinformationized.

Net net: Relevance will be quite difficult to define outside of a transactional relationship. Will this matter? Nope, because most users accept what a service returns as relevant, accurate, and reliable.

Stephen E Arnold, February 16, 2024

Interesting Observations: Do These Apply to “Technology Is a Problem Solver” Thinking?

February 16, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay by Nat Eliason, an author previously unknown to me. “A Map Is Not a Blueprint: Why Fixing Nature Fails” is a short collection of the ways human thought processes create some quite spectacular problems. His examples include weight loss compounds like Ozempic, trans fats, and the once-trendy solution to mental issues, the lobotomy.

Humans can generate a map of a “territory” or a problem space. Then humans dig in and try to make sense of their representation. The problem is that humans may approach a problem and get the solution wrong. No surprise there. One of the engines of innovation is coming up with a solution to a problem created by something incorrectly interpreted. A recent example is the befuddlement of Mark Zuckerberg when a member of the Senate committee questioning him about his company suggested that the quite wealthy entrepreneur had blood on his hands. No wonder he apologized for creating a service that has the remarkable power of bringing people closer together, well, sometimes.

Immature home economics students can apologize for a cooking disaster. Techno feudalists may have a more difficult time making amends. But there are lawyers and lobbyists ready and willing to lend a hand. Thanks, MSFT Copilot Bing thing. Good enough.

What I found interesting in Mr. Eliason’s essay was the model or mental road map humans create (consciously or unconsciously) to solve a problem. I am thinking in terms of social media, AI generated results for a peer-reviewed paper, and Web search systems which filter information to generate a pre-designed frame for certain topics.

Here’s the list of the five steps in the process creating interesting challenges for those engaged in and affected by technology today:

  1. Smart people see a problem, study it, and identify options for responding.
  2. The operations are analyzed and then boiled down to potential remediations.
  3. “Using our map of the process we create a solution to the problem.”
  4. The solution works. The downstream issues are not identified or anticipated in a thorough manner.
  5. New problems emerge as a consequence of having a lousy mental map of the original problem.

Interesting. Creating a solution to a technology-sparked problem without considering the consequences may be one key to success. “I had no idea” or “I’m sorry” makes everything better.

Stephen E Arnold, February 16, 2024

Embrace Good Enough … or Less Than Good. Either Way Is Okay Today

February 16, 2024

This essay is the work of a dumb dinobaby. No smart software required.

As humans, we want to be the best individuals we can be. We especially think about how to improve ourselves and examine our flaws around the New Year. Sophie McBain of The Guardian evaluated different approaches to life in the article “The Big Idea: Is Being ‘Good Enough’ Better Than Perfection?” McBain discusses the differences between people who are fine with “good enough” and those bent on perfection.

A high school teacher admires a student who built an innovative chair desk. Yep, MSFT Copilot. Good enough.

She uses Internet shopping to explain the differences between the two personality types. Perfectionists, aka “maximizers,” want to achieve the best of everything. It’s why they search for the perfect item online, reading “best of…” lists and product reviews. This group spends hours finding the best items. “Good enough” people, aka “satisficers,” review the same information but in lesser amounts and quickly make a decision.

Maximizers do better professionally, but they’re less happy in their personal lives. Satisficers are happier because they use their time to pursue activities that make them happy. The Internet’s blasting of idealized lifestyles also contributes to depressive outlooks:

“In his 2022 book, The Good-Enough Life, Avram Alpert argues that personal quests for greatness, and the unequal social systems that fuel these quests, are at the heart of much that is wrong in the world, driving overconsumption and environmental degradation, stark inequalities and increased unhappiness among people who feel locked in endless competition with one another. Instead of scrambling for a handful of places at the top, Alpert believes we’d all be better off dismantling these hierarchies, so that we no longer cultivate our talents to pursue wealth, fame or power, but only to enrich our own lives and those of others.”

McBain finishes her article by encouraging people to examine their lives through a “good enough” lens. It’s a kind sentiment to share at the start of a New Year, but it also encourages people to settle. If people aren’t happy with their life choices, they should evaluate them critically and tackle solutions. “Good enough” is fine for unimportant tasks, but “maximizing” potential for a better future is a healthier outlook.

Whitney Grace, February 16, 2024
