Academics and Ethics: We Can Make It Up, Right?

July 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Bogus academic studies were already a troubling issue. Now generative text and image algorithms are turbocharging the problem. Nature describes how in “AI Intensifies Fight Against ‘Paper Mills’ that Churn Out Fake Research.” Writer Layal Liverpool states:

“Generative AI tools, including chatbots such as ChatGPT and image-generating software, provide new ways of producing paper-mill content, which could prove particularly difficult to detect. These were among the challenges discussed by research-integrity experts at a summit on 24 May, which focused on the paper-mill problem. ‘The capacity of paper mills to generate increasingly plausible raw data is just going to be skyrocketing with AI,’ says Jennifer Byrne, a molecular biologist and publication-integrity researcher at New South Wales Health Pathology and the University of Sydney in Australia. ‘I have seen fake microscopy images that were just generated by AI,’ says Jana Christopher, an image-data-integrity analyst at the publisher FEBS Press in Heidelberg, Germany. But being able to prove beyond suspicion that images are AI-generated remains a challenge, she says. Language-generating AI tools such as ChatGPT pose a similar problem. ‘As soon as you have something that can show that something’s generated by ChatGPT, there’ll be some other tool to scramble that,’ says Christopher.”

Researchers and integrity analysts at the summit brainstormed ideas to combat the growing problem and plan to publish an action plan “soon.” On a related issue, attendees agreed AI can be a legitimate writing aid but floated certain requirements, like watermarking AI-generated text and providing access to raw data.


Post-docs and graduate students make up data. MidJourney captures the camaraderie of 21st-century whiz kids rather well. A shared experience is meaningful.

Naturally, such decrees would take time to implement. Meanwhile, readers of academic journals should up their levels of skepticism considerably.

But tenure and grant money are more important than — what’s that concept? — ethical behavior for some.

Cynthia Murrell, July 4, 2023

NSO Group Restructuring Keeps Pegasus Aloft

July 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The NSO Group has been under fire from critics for the continuing deployment of its infamous Pegasus spyware. The company, however, might better resemble a different mythological creature: the phoenix. Since its creditors pulled their support, NSO appears to be rising from the ashes.


Pegasus continues to fly. Can it monitor some of the people who have mobile phones? Not in ancient Greece. Other places? I don’t know. MidJourney’s creative powers do not shed light on this question.

The Register reports, “Pegasus-Pusher NSO Gets New Owner Keen on the Commercial Spyware Biz.” Reporter Jessica Lyons Hardcastle writes:

“Spyware maker NSO Group has a new ringleader, as the notorious biz seeks to revamp its image amid new reports that the company’s Pegasus malware is targeting yet more human rights advocates and journalists. Once installed on a victim’s device, Pegasus can, among other things, secretly snoop on that person’s calls, messages, and other activities, and access their phone’s camera without permission. This has led to government sanctions against NSO and a massive lawsuit from Meta, which the Supreme Court allowed to proceed in January. The Israeli company’s creditors, Credit Suisse and Senate Investment Group, foreclosed on NSO earlier this year, according to the Wall Street Journal, which broke that story the other day. Essentially, we’re told, NSO’s lenders forced the biz into a restructure and change of ownership after it ran into various government ban lists and ensuing financial difficulties. The new owner is a Luxembourg-based holding firm called Dufresne Holdings controlled by NSO co-founder Omri Lavie, according to the newspaper report. Corporate filings now list Dufresne Holdings as the sole shareholder of NSO parent company NorthPole.”

President Biden’s executive order notwithstanding, Hardcastle notes governments’ responses to spyware have been tepid at best. For example, she tells us, the EU opened an inquiry after spyware was found on phones associated with politicians, government officials, and civil society groups. The result? The launch of an organization to study the issue. Ah, bureaucracy! Meanwhile, Pegasus continues to soar.

Cynthia Murrell, July 4, 2023

Crackdown on Fake Reviews: That Is a Hoot!

July 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “The FTC Wants to Put a Ban on Fake Reviews.” My first reaction was, “Shouldn’t the ever-so-confident Verge poobah have insisted on the word ‘impose’; specifically, ‘The FTC wants to impose a ban on fake reviews’ or maybe ‘The FTC wants to rein in fake reviews’?” But who cares? The Verge is the digital New York Times and go-to source of “real” Silicon Valley-type news.

The write up states:

If you, too, are so very tired of not knowing which reviews to trust on the internet, we may eventually get some peace of mind. That’s because the Federal Trade Commission now wants to penalize companies for engaging in shady review practices. Under the terms of a new rule proposed by the FTC, businesses could face fines for buying fake reviews — to the tune of up to $50,000 for each time a customer sees one.
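The proposed penalty scales with impressions, which is what makes it eye-watering. A back-of-the-envelope sketch of the worst-case exposure (all the numbers below are invented for illustration; the $50,000-per-view ceiling is the only figure from the write up):

```python
def max_exposure(fake_reviews, views_per_review, fine_per_view=50_000):
    """Worst-case exposure under the proposed FTC rule: a fine of up
    to $50,000 each time a customer sees a purchased fake review."""
    return fake_reviews * views_per_review * fine_per_view

# Ten bought reviews, each seen 1,000 times: a ruinous ceiling.
print(f"${max_exposure(10, 1_000):,}")  # $500,000,000
```

Even modest traffic turns a handful of purchased reviews into a nine-figure theoretical liability, which is presumably the point of tying the fine to views rather than to reviews.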

For more than 30 years, I worked with an individual named Robert David Steele, who was an interesting figure in the intelligence world. He wrote and posted on Amazon more than 5,000 reviews. He wrote these himself, often in down times with me between meetings. At breakfast one morning in the Hague, Steele was writing at the breakfast table, and he knocked over his orange juice. He said, “Give me your napkin.” He used it to jot down a note; I sopped up the orange juice.


“That’s a hoot,” says a person who wrote a product review to make a competitor’s offering look bad. A $50,000 fine. Legal eagles take flight. The laughing man is an image flowing from the creative engine at MidJourney.

He wrote what I call humanoid reviews.

Now reviews of any type are readily available. Here’s an example from Fiverr.com, an Israel-based outfit with gig workers from many countries and free time on their hands:

[Screenshot: Fiverr gig listings for review writing]

How many of these reviews will be written by a humanoid? How many will be spat out via a ChatGPT-type system?

What about reviews written by someone with a bone to pick, shaded so that the product, the book, or whatever is presented in a questionable way? Did Mr. Steele write a review of an intelligence-related book and point out that the author was misinformed about the “real” intel world?

Several observations:

  1. Who or what is going to identify fake reviews?
  2. What’s the difference between a Fiverr-type review and a review written by a humanoid motivated by doing good or making the author or product look bad?
  3. As machine-generated text improves, how will software written to identify machine-generated reviews keep up with advances in the machine-generating software itself?

Net net: External editorial and ethical controls may be impractical. In my opinion, a failure of ethical controls within social structures creates a greenhouse in which fakery, baloney, misinformation, and corrupted content thrive. In this context, who cares about the headline? It, too, is a reflection of the pickle barrel in which we soak.

Stephen E Arnold, July 3, 2023

Google: Is the Company Engaging in F-U-D?

July 3, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

When I was a wee sprout in 1963, I was asked to attend an IBM presentation at the so-so university I attended. Because I was a late-night baby-sitter for the school’s big, hot, and unreliable mainframe, I was offered a full-day lecture and a free lunch. Of course, I went. I remember one thing more than a half century later. The other attendees from my college were using a word I was hearing but not interpreting reasonably well.


The artistic MidJourney presents a picture showing executives struggling to process Google’s smart software announcements about the future. One seems to be wondering, “These are the quantum supremacy people. They revolutionized protein folding. Now they want us to wait while our competitors are deploying ChatGPT-based services? F-U-D that!”

The word was F-U-D. To make sure I wasn’t confusing the word with a popular epithet, I asked one of the computer center supervisors (actually an underpaid graduate student, but superior to my $3-per-hour station), “What’s F-U-D?”

The fellow explained, “It means fear, uncertainty, and doubt. The idea is that IBM wants us to be afraid of buying something from Burroughs or National Cash Register. The uncertainty means that we have to make sure the competitors’ computers are as good as the IBM machines. And the doubt means that if we buy a Control Data system, we can be fired if it isn’t IBM.”

Yep, F-U-D. The game plan designed to make people like me cautious about anything not embraced by administrators. New things had to be kept in a sandbox. Really new things had to be part of a Federal research grant which could blow up and destroy a less-than-brilliant researcher’s career but cause no ripple in carpetland.

Why am I thinking about F-U-D?

I read “Here’s Why Google Thinks Its Gemini AI Will Surpass ChatGPT.” The write up makes clear:

“At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models,” Hassabis told Wired. “We also have some new innovations that are going to be pretty interesting.”

I interpreted this comment in this way:

  1. Be patient, Google has better, faster, cheaper, more wonderful technology for you coming soon, really soon
  2. Google is creating better AI because we are combining great technology with the open source systems and methods we made available to losers like OpenAI
  3. Google is innovative. (Remember, please, that Google equates innovation with complexity.)

Net net: By Gemini, just slow down. Wait for us. We are THE Google, and we do F-U-D.

Stephen E Arnold, July 3, 2023

Wanna Be an MBA? You Got It and for Only $45US

June 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I managed to eke out of college as an ABD, or All But Dissertation. (How useful would it be for me to write 200 pages about Chaucer’s alleged knowledge of the thrilling Apocrypha?) So no MBA or Doctor of Business Administration or the more lofty PhD in Finance. I am a 78-year-old humanoid proudly representing the dull-normal in my cohort.


“So you got your MBA from which school?” asks the human people manager. The interviewee says, “I got it from an online course.” “Do you have student loans?” queries the interviewer. “Nah, the degree equivalent cost me about $50,” explains the graduate. “Where did you get the tassel and robe?” probes the keen-eyed interviewer at the blue-chip consulting firm. The motivated MBA offers, “At the Goodwill store.” The image is the MFA-grade output from MidJourney.

But you — yes, you, gentle reader — can do better. You can become a Master of Business Administration. You will be wined (or is that whined) and dined by blue chip consulting firms. You can teach as a prestigious adjunct professor before you work at Wal-Mart or tutor high school kids in math. You will be an MBA, laboring at one of those ethics factories more commonly known as venture capital firms. Imagine.

How can this be yours? Just pony up $45US and study MBA topics on your own. “This MBA Training Course Bundle Is 87% Off Right Now.” The article breathlessly explains:

The courses are for beginners and require no previous experience with the business world. Pick and choose which courses you want to complete, or take the whole package to maximize your knowledge. Work through materials at your own pace (since you have lifetime access) right on your mobile or desktop device.

There is an unfortunate disclaimer; to wit:

This course bundle will not replace a formal MBA degree—but it can get you some prior knowledge before pursuing one or give you certificates to include on your resume. Or, if you’re an aspiring entrepreneur, you may just be searching for some tips from experts.

A quick visit to a Web search system for “cheap online PhD” can convert that MBA learning into even more exciting job prospects.

The Beyond Search goose says, “Act now and become an eagle. Unlike me, a silly goose.”

Stephen E Arnold, June 30, 2023

Accuracy: AI Struggles with the Concept

June 30, 2023

For those who find reading and understanding research papers daunting, algorithms can help. At least according to the write-up, “5 AI Tools for Summarizing a Research Paper” at Cointelegraph. Writer Alice Ivey emphasizes that research articles can be full of jargon, complex ideas, and technical descriptions, making them tricky for anyone outside the researchers’ field. It is AI to the rescue! That is, as long as you don’t mind summaries that contain a few errors. We learn:

“Artificial intelligence (AI)-powered tools that provide support for tackling the complexity of reading research papers can be used to solve this complexity. They can produce succinct summaries, make the language simpler, provide contextualization, extract pertinent data, and provide answers to certain questions. By leveraging these tools, researchers can save time and enhance their understanding of complex papers.

But it’s crucial to keep in mind that AI tools should support human analysis and critical thinking rather than substitute for them. In order to ensure the correctness and reliability of the data collected from research publications, researchers should exercise caution and use their domain experience to check and analyze the outputs generated by AI techniques. … It’s crucial to keep in mind that AI tools may not always accurately capture the context of the original publication, even though they can help summarize research papers.”

So, one must be familiar with the area of study to judge whether the AI got it right. Doesn’t that defeat the purpose? One can imagine scenarios where relying on misinformation could have serious consequences. Or at least some embarrassment.

The article lists ChatGPT, QuillBot, SciSpacy, IBM Watson Discovery, and Semantic Scholar as our handy but potentially inaccurate AI explainers. Some readers may possess the knowledge needed to recognize a faulty summary and think such tools may at least save them a bit of time. It would be nice to know how much one would pay for that convenience, but that small detail is missing from the write-up. ChatGPT Plus, for example, runs $20 per month, or $240 per year. It might be more cost effective to just read the articles for oneself.
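None of the listed commercial tools publish their internals, and most are far more sophisticated than this. But a toy frequency-based extractive summarizer illustrates one way a summary can silently drop the part that matters (the example paper text is invented):

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Score each sentence by how frequent its words are in the whole
    document, then keep the top scorers in their original order.
    Rare but crucial qualifiers (e.g. a lone 'however' caveat) can
    score low, which is one way context gets lost."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return [s for s in sentences if s in ranked]

paper = ("The model improves accuracy on benchmark tasks. "
         "The model reduces training cost. "
         "However, the gains do not hold on out-of-domain data.")
print(extractive_summary(paper, 2))
```

Run on this three-sentence “paper,” the summarizer keeps the two upbeat sentences and drops the caveat about out-of-domain data, which is precisely the kind of omission only a domain-aware reader would catch.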

Cynthia Murrell, June 30, 2023

Databricks: Signal to MBAs and Data Wranglers That Is Tough to Ignore

June 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Do you remember the black and white pictures of the Pullman riots? No, okay. Steel worker strikes in Pittsburgh? No. Scuffling outside of Detroit auto plants? No. Those images may be helpful to get a sense of what newly disenfranchised MBAs and data wranglers will be doing in the weeks and months ahead.

“Databricks Revolutionizes Business Data Analysis with AI Assistant” explains that the Databricks smart software

interprets the query, retrieves the relevant data, reads and analyzes it, and produces meaningful answers. This groundbreaking approach eliminates the need for specialized technical knowledge, democratizing data analysis and making it accessible to a wider range of users within an organization. One of the key advantages of Databricks’ AI assistant is its ability to be trained on a company’s own data. Unlike generic AI systems that rely on data from the internet, LakehouseIQ quickly adapts to the specific nuances of a company’s operations, such as fiscal year dates and industry-specific jargon. By training the AI on the customer’s specific data, Databricks ensures that the system truly understands the domain in which it operates.
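Databricks has not published LakehouseIQ’s internals, so take this as a caricature, not the real thing: the pipeline the quote describes (interpret the query, retrieve the relevant data, analyze it, answer) can be sketched with a jargon-to-SQL lookup over a toy table. The table, the jargon map, and every number below are invented for illustration:

```python
import sqlite3

# A toy company table standing in for a lakehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, fiscal_q TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("EMEA", "FY24-Q1", 120.0),
    ("EMEA", "FY24-Q2", 150.0),
    ("APAC", "FY24-Q1", 90.0),
])

# "Interpretation": map company jargon in the question to SQL templates.
TEMPLATES = {
    "total revenue": "SELECT SUM(revenue) FROM sales WHERE region = ?",
    "best quarter": ("SELECT fiscal_q FROM sales WHERE region = ? "
                     "ORDER BY revenue DESC LIMIT 1"),
}

def answer(question, region):
    """Match a known phrase, run its query, return the value."""
    for phrase, sql in TEMPLATES.items():
        if phrase in question.lower():
            return conn.execute(sql, (region,)).fetchone()[0]
    return "I don't know."

print(answer("What is our total revenue?", "EMEA"))  # prints 270.0
```

The point of the caricature: the hard part is not the SQL, it is the “interpretation” step, which is where training on a company’s own fiscal calendars and jargon (rather than generic internet text) would matter.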


MidJourney has delivered an interesting image (completely original, of course) depicting angry MBAs and data wranglers massing in Midtown and preparing to storm one of the quasi-monopolies which care about their users, employees, the environment, and bunny rabbits. Will these professionals react like those in other management-labor dust-ups?

Databricks appears to be one of the outfits applying smart software to reduce or eliminate professional white collar work done by those who buy $7 lattes, wear designer T-shirts, and don wonky sneakers for important professional meetings.

The CEO of Databricks (a data management and analytics firm) says:

By training their AI assistant on the customer’s specific data, Databricks ensures that it comprehends the jargon and intricacies of the customer’s industry, leading to more accurate and insightful analysis.

My interpretation of the article is simple: If the Databricks system works, MBAs and data wranglers will be out of a job. Furthermore, my view is that if systems like Databricks’ work as advertised, the shift from expensive and unreliable humans will not be gradual. Think phase change. One moment you have a solid; the next, plasma. Hot plasma can vaporize organic compounds in some circumstances. Maybe MBAs and data wranglers are impervious? On the other hand, maybe not.

Stephen E Arnold, June 29, 2023

Microsoft: A Faint Signal from Employees or Just Noise from Grousers?

June 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I spotted this story in my newsfeed this morning: “Leaked Internal Microsoft Poll Shows Fewer Employees Have Confidence in Leadership and Gave the Worst Score to a Question about Whether Working There Is a Good Deal.”

My yellow lights began to flash. I have no way of knowing if the data were compiled in a rigorous, Statistics 101 manner. I have no way of determining if the data were just made up the way a certain big wheel at Stanford University handled “real” data. I have no way of knowing if the write up and the facts were a hallucination generated by a “good enough” Microsoft Edge smart output.

Nevertheless, I found the write up amusing.

Consider this passage:

The question about confidence in leaders got an average of 73% favorable responses across the company in this year’s poll compared to 78% in last year’s, according to results viewed by Insider.

I think that means the game play, the SolarWinds’ continuing sirocco, and the craziness of recent moments took a toll (if this does not resonate, don’t ask).

Let’s assume that the data are faked or misstated. The question which arises here in Harrod’s Creek, Kentucky, is: Why now?

Stephen E Arnold, June 29, 2023

Annoying Humans Bedevil Smart Software

June 29, 2023

Humans are inherently biased. While sexist, ethnic, and socioeconomic prejudices are assumed to be the cause behind biases, unconscious obliviousness is more likely the culprit. Whatever causes us to be biased, AI developers are unfortunately teaching AI algorithms our fallacies. Bloomberg investigates how AI is being taught bad habits in the article, “Humans Are Biased, Generative AI Is Even Worse.”

Stable Diffusion is one of the many AI bots that generate images from text prompts. Based on these prompts, it delivers images that display an inherent bias in favor of white men and discriminate against women and brown-skinned people. Using Stable Diffusion, Bloomberg conducted a test of 5,000 AI images. The images were analyzed, and the finding was that Stable Diffusion is more racist and sexist than real life.
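Bloomberg’s exact methodology is not reproduced here, but the general technique is straightforward: label each generated image by perceived demographic, then compare the shares against a real-world baseline. A minimal sketch, with made-up labels and an invented baseline:

```python
from collections import Counter

def representation_skew(labels, baseline):
    """Compare each group's share in a set of generated-image labels
    against its real-world baseline share. Positive skew means the
    group is over-represented in the generated set."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: counts[group] / total - share
            for group, share in baseline.items()}

# Hypothetical labels for 100 images generated from the prompt "a judge",
# and an invented baseline of actual judge demographics.
labels = (["white_man"] * 80 + ["white_woman"] * 10
          + ["woman_of_color"] * 10)
baseline = {"white_man": 0.55, "white_woman": 0.25, "woman_of_color": 0.20}

skew = representation_skew(labels, baseline)
print(skew)  # white_man share exceeds the baseline by 0.25
```

A skew dictionary like this is how “more sexist than real life” becomes a measurable claim: the generated share of a group is compared not to zero but to the group’s actual prevalence in the occupation.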

While Stable Diffusion and other text-to-image AI are entertaining, they are already employed by politicians and corporations. AI-generated images and videos set a dangerous precedent, because they allow bad actors to propagate false information ranging from conspiracy theories to harmful ideologies. Ethical advocates, politicians, and some AI leaders are lobbying for moral guidelines, but a majority of tech leaders and politicians are not concerned:

“Industry researchers have been ringing the alarm for years on the risk of bias being baked into advanced AI models, and now EU lawmakers are considering proposals for safeguards to address some of these issues. Last month, the US Senate held a hearing with panelists including OpenAI CEO Sam Altman that discussed the risks of AI and the need for regulation. More than 31,000 people, including SpaceX CEO Elon Musk and Apple co-founder Steve Wozniak, have signed a petition posted in March calling for a six-month pause in AI research and development to answer questions around regulation and ethics. (Less than a month later, Musk announced he would launch a new AI chatbot.) A spate of corporate layoffs and organizational changes this year affecting AI ethicists may signal that tech companies are becoming less concerned about these risks as competition to launch real products intensifies.”

Biased datasets for AI are not new. AI developers must create more diverse and “clean” data that incorporate a true, real-life depiction. The answer may be synthetic data; that is, data in which human involvement is minimized, except when the system is first set up.

Whitney Grace, June 29, 2023

Google: Users and Its Ad Construction

June 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the last 48 hours, I have heard or learned about some fresh opinions about Alphabet / Google / YouTube (hereinafter AGY). Google Glass III (don’t forget the commercial version, please) has been killed. Augmented Reality? Not for the Google. Also, AGY continues to output promises about its next Bard. Is it really better than ChatGPT? And AGY is back in the games business. (Keep in mind that Google pitched Yahoo with a games deal in 2004, if I remember correctly, and then flamed out with its underwhelming online game play a decade later, which was followed by the somewhat forgettable Stadia game service.) Finally, a person told me that Prabhakar Raghavan allegedly said, “We want our customers to be happy.” Inspirational indeed. I think I hit the highlights from the information I encountered since Monday, June 25, 2023.


The ever-sensitive creator MidJourney provided this illustration of a structure with a questionable foundation. Could the construct lose a piece here and a piece there until it must be dismantled to save the snail darters living in the dormers? Are the residents aware of the issue?

The fountain of Googliness seems to be copious. I read “Google Ads Can Do More for Its Customers.” The main point of the article is that:

Google’s dominance in the search engine industry, particularly in search ads, is unparalleled, making it virtually the only viable option for advertisers seeking to target search traffic. It’s a conflict of interest, as Google’s profitability is closely tied to ad revenue. As Google doesn’t do enough to make Google Ads a more transparent platform and reduce the cost for its customers, advertisers face inflated costs and fierce competition, making it challenging for smaller businesses with limited budgets to compete effectively.

Gulp. If I understand this statement, Google is exploiting its customers. Remember: These are the entities providing the money to fund AGY’s numerous administrative costs, and those costs are going just one way: up and up. Imagine the data center, legal fines, and litigation costs. Big numbers before adding in salaries and bonuses.

Observations:

  1. Structural weakness can be ignored until the edifice just collapses.
  2. Unhappy customers might want to drop by for a conversation and the additional weight of these humanoids may cross a tipping point.
  3. US regulators may ignore AGY, but government officials in other countries may not.

Bud Light’s adventures with its customers provide a useful glimpse of what those who are unhappy can do and do quickly. The former Bud Light marketing whiz has a degree from Harvard. Perhaps this individual can tackle the AGY brand? Just a thought.

Stephen E Arnold, June 28, 2023
