College Student Builds App To Detect AI Written Essays: Will It Work? Sure

January 19, 2023

Artists are worried that AI algorithms will steal their jobs, and now writers are in the same boat because the same thing is happening to them. AI algorithms are now competent enough to write coherent text. They can write simple conversations, short movie scripts, flash fiction, and even assist in the writing process. Students are also excited about the prospect of AI writing algorithms, because it means they can finally outsource their homework to computers. Or they could have, until someone was clever enough to design an AI that detects AI-generated essays. Business Insider reports on how a college student is now the bane of the global student body: “A Princeton Student Built An App Which Can Detect If ChatGPT Wrote An Essay To Combat AI-Based Plagiarism.”

Princeton computer science major Edward Tian spent his winter holiday designing an algorithm to detect whether an essay was written by the new AI writer ChatGPT. Dubbed GPTZero, Tian’s AI can correctly identify what is written by a human and what is not. GPTZero works by rating text on its perplexity: how complex and random the writing is. GPTZero proved to be so popular that it crashed soon after its release. The app is now in a beta phase that people can sign up for, or they can use it on Tian’s Streamlit page.
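The perplexity idea is simple enough to sketch. This is not Tian’s actual code (GPTZero reportedly scores text with a large language model); the toy character-bigram model, reference corpus, and sample strings below are illustrative assumptions only. The point it demonstrates: predictable, formulaic text scores low perplexity, while erratic text scores high.

```python
import math
from collections import Counter

def char_bigram_model(corpus: str):
    """Estimate P(next char | char) with add-one smoothing from a reference corpus."""
    pairs = Counter(zip(corpus, corpus[1:]))   # bigram counts
    firsts = Counter(corpus[:-1])              # unigram counts for the first char
    vocab_size = len(set(corpus))
    def prob(a, b):
        return (pairs[(a, b)] + 1) / (firsts[a] + vocab_size)
    return prob

def perplexity(text: str, prob) -> float:
    """exp of the average negative log-probability per bigram.
    Lower perplexity = more predictable under the model."""
    logs = [math.log(prob(a, b)) for a, b in zip(text, text[1:])]
    return math.exp(-sum(logs) / len(logs))

# Hypothetical reference corpus standing in for a language model's training data.
reference = "the cat sat on the mat and the dog sat on the log " * 20
model = char_bigram_model(reference)

predictable = "the cat sat on the mat"
surprising = "zqxv jk wpf qzmv xkcd"
# The formulaic sentence scores lower perplexity than the random one.
assert perplexity(predictable, model) < perplexity(surprising, model)
```

A detector like GPTZero flips this around: because machine-generated prose tends to hug the model’s most probable next words, suspiciously low perplexity becomes evidence the text was not written by a human.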

Tian’s desire to prevent AI plagiarism motivated him to design GPTZero:

“Tian, a former data journalist with the BBC, said that he was motivated to build GPTZero after seeing increased instances of AI plagiarism. ‘Are high school teachers going to want students using ChatGPT to write their history essays? Likely not,’ he tweeted.”

AI writing algorithms are still in their infancy, like art generation AI. Writers should not fear job replacement yet. Artistic AI puts the arts in the same position painting occupied with the arrival of photography, radio with television, and libraries with the Internet. Artistic AI will change the mediums: portions of them will persevere and others will change. AI should be used as a tool to improve the process.

Students would never find and use a work-around.

Whitney Grace, January 19, 2023

Google Project Teaches Code to Write Itself

January 18, 2023

Google may have an opportunity to demonstrate how its smart software can deal with the upstart ChatGPT. Google has software that can write more smart software. Imagine. Google “good enough” code to deliver good enough software for good enough search. Maybe I should say, “Once good enough?”

Ever conscious of its bottom line, Google knows saving money on resources is as good as raking in ad dollars. To that end, Yahoo Finance reveals, “Google Has a Secretive Project that Could Reduce the Need for Human Engineers.” Goodbye, pesky payroll. That the software does not protest management decisions may be just a side benefit. Writer Jordan Parker Erb reveals:

“The project is part of a broader push by Google into ‘generative AI’ — and it could have profound implications for the company’s future and developers who write code. Generative AI, a tech that uses algorithms to create images, videos, and more, has recently become the hottest thing in Silicon Valley. In this case, the goal is to reduce the need for humans to write and update code, while maintaining code quality. Doing so could greatly impact the work of human engineers in the future. ‘The idea was: how do we go from one version to the next without hiring all these software engineers?’ said a person familiar with the project when it was at X, the company’s moonshot unit.”

According to the brief write-up, the project was called “Pitchfork” while at X. Perhaps not the most sensitive name to workers concerned with being flung out the door. No wonder the company has not trumpeted its existence. The project has since moved to Google Labs, a shift Erb notes reveals its importance to Google execs. The sooner it can be rid of those vexing human employees, the better.

And the pitchfork? A potentially dangerous tool.

Cynthia Murrell, January 10, 2023

Google, Take Two Aspirin and One Alka-Seltzer: It Is Buzz Time for ChatGPT

January 17, 2023

What do you do when the “trust” outfit Thomson Reuters runs a story with this headline? “Davos 2023: CEOs Buzz about ChatGPT-Style AI at World Economic Forum.” If you are like me, you think, “Meh.”

But what if you are a Google / DeepMind wizard?

Now consider this headline: “Google’s Muse Model Could Be the Next Big Thing for Generative AI.” If you are like me, you think, “Sillycon Valley PR.”

But what if you are an OpenAI or Microsoft brainiac?

In terms of reach, I think the Reuters’ story will be diffused to a broader business audience. The subject is something perceived as magnetic. Any carpetlander can get an associate to demonstrate ChatGPT outputting a search result via You.com or some other knowledge product from the numerous demos available with a mouse click.

But to see the Google Muse story, one has to follow a small number of Sillycon Valley outlets. And what if the carpetlander wants to see a demonstration of the magical, super effective Muse? Yeah, use your imagination.

Perhaps Google and its ineffable search team may want to crunch on another couple of aspirin and get some of that chewable antacid stuff. It is going to be a long PR day at Davos.

One doesn’t have to be a business maven to understand that ChatGPT is a nice subject when the options at Davos are war, plummeting demand for some big buck commodities, Germany’s burning lignite, China’s Covid and Taiwan fixation, and similar economically interesting topics.

What will CEOs and Davos attendees take away from the ChatGPT buzz? My experience suggests some sort of action, even if it is nothing more than investigating whether the technology can deal with pesky customer support inquiries.

And where is Google amidst this buzz? Google has the forward forward method, the next big thing. Google has academic papers which point out the weaknesses of non-Google methods. Google has Muse, or at least a news release story about Muse.

Will OpenAI and ChatGPT have legs? Who knows. Good, bad, or indifferent, ChatGPT has buzz, lots of it. I know because the “trust” outfit says ChatGPT will “transform” the security-minded Microsoft. Who knew?

Thus, at this moment in time, Google may become a good customer for over-the-counter headache remedies and Alka-Seltzer. Remember that jingle’s lyrics?

Plop plop, fizz fizz

Oh, what a relief it is.

Maybe ChatGPT will just fade away like a hangover or the tummy ache from eating the whole thing? Is it my imagination, or is Microsoft chowing down on croissants whilst explaining what ChatGPT will do for its enterprise customers?

I will consult my “muse.” Oh, sorry, not available.

Stephen E Arnold, January 17, 2023

Amazing Statement about Google

January 17, 2023

I am not into Twitter. I think that intelware and policeware vendors find the Twitter content interesting. A few of them may be annoyed that the Twitter application programming interface seems to have gone on a walkabout. One of the analyses of Twitter I noted this morning (January 15, 2023, 10:35 am) is “Twitter’s Latest ‘Feature’ Is How You Know Elon Musk Is in Over His Head. It’s the Cautionary Tale Every Business Needs to Hear.”

I want to skip over the Twitter palpitations and focus on one sentence:

At least, with Google, the company is good enough at what it does that you can at least squint and sort of see that when it changes its algorithm, it does it to deliver a better experience to its users–people who search for answers on Google.

What about that “at least”? Also, what do you make of the “you can at least squint and sort of see that when it [Google] changes its algorithm”? Squint to see clearly. Into Google? Hmmm. I can squint all day at a result like this for the query “online hosting” and not see anything except advertising and a plug for the Google Cloud:

[screenshot: Google results for the query “online hosting”]

Helpful? Sure to Google, not to this user.

Now consider the favorite Google marketing chestnut, “a better experience.” Ads and a plug for Google does not deliver to me a better experience. Compare the results for the “online hosting” query to those from www.you.com:

[screenshot: You.com results for the query “online hosting”]

Google is the first result, which suggests some voodoo in the search engine optimization area. The other results point to a free hosting service, a PC Magazine review article (reviews are often an interesting editorial method for talking about products), and an outfit called Online Hosting Solution.

Which is better? Google’s ads and self promotion or the new You.com pointer to Google and some sort of relevant links?

Now let’s run the query “online hosting” on Yandex.com (not the Russian language version). Here’s what I get:

[screenshot: Yandex.com results for the query “online hosting”]

Note that the first link is to a particular vendor with no ad label slapped on the link. The other links are to listicle articles which present a group of hosting companies for the person running the query to consider.

Of the three services, which requires the “squint” test? I suppose one can squint at the Google result and conclude that it is just wonderful, just not for objective results. The You.com results are a random list of mostly relevant links. But that top hit pointing at Google Cloud makes me suspicious. Why Google? Why not Amazon AWS, Microsoft Azure, the fascinating Epik.com, or another vendor?

In this set of three, Yandex.com strikes me as delivering cleaner, more on point results. Your mileage may vary.

In my experience, systems which deliver answers are a quest. Most of the systems to which I have been exposed seem the digital equivalent of a ride with Don Quixote. The windmills of relevance remain at risk.

Stephen E Arnold, January 17, 2023

Google and Its PR Response to the ChatGPT Buzz Noise

January 16, 2023

A crazy wave is sweeping through the technology datasphere. ChatGPT, OpenAI, Microsoft, Silicon Valley pundits, and educators are shaken, not stirred, into the next big thing. But where is the Google in this cyclone bomb of smart software? The craze is not for a list of documents matching a user’s query. People like students and spammers are eager for tools that can write, talk, draw pictures, and code. Yes, code more good enough software, by golly.

In this torrential outpouring of commentary, free demonstrations, and venture capitalists’ excitement, I want to ask a simple question: Where’s the Google? Well, to the Google haters, the GOOG is in panic mode. RED ALERT, RED ALERT.

From my point of view, the Google has been busy busy being Google. Its head of search Prabhakar Raghavan is in the spotlight because some believe he has missed the Google bus taking him to the future of search. The idea that Googzilla has been napping before heading to Vegas to follow the NCAA basketball tournament is incorrect. Google has been busy, just not in a podcast, talking heads, pundit-tweeting way.

Let’s look at two examples of what Google has been up to since ChatGPT became the next big thing in a rather dismal economic environment.

The first is the appearance of articles about the forward forward method for training smart software. You can read a reasonably good explanation in “What Is the “Forward-Forward” Algorithm, Geoffrey Hinton’s New AI Technique?” The idea is that some of the old-school approaches won’t work in today’s go-go world. Google, of course, has solved this problem. Did the forward forward thing catch the attention of the luminaries excited about ChatGPT? No. Why? Google is not too good at marketing in my opinion. ChatGPT is destined to be a mere footnote. Yep, a footnote, probably one in multiple Google papers like Understanding Diffusion Models: A Unified Perspective (August 2022). (Trust me. There are quite a few of these papers with comments about the flaws of ChatGPT-type software in the “closings” or “conclusions” to these Google papers.)

The second is the presentation of information about Google’s higher purpose. A good example of this is the recent interview with a Googler involved in the mysterious game-playing, protein-folding outfit called DeepMind. “DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution” does a good job of hitting the themes expressed in technical papers, YouTube video interviews, and breezy presentations at smart software conferences. This is a follow-on to Google’s conversations with MIT researcher Lex Fridman: one with the Google engineer who thought the DeepMind system was a person, and a two-hour chat with the boss of DeepMind. The CEO video is at this link.

I want to highlight three points from this interview/article.

[A] Let’s look at this passage from the Time Magazine interview with the CEO of DeepMind:

Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.

I interpret this to mean that Google has better, faster, cheaper, and smarter NLP technology. Notice the idea of putting competitors in “mere footnotes.” This is an academic, semi-polite way to say, “Loser.”

[B] DeepMind allegedly became a unit of Alphabet Google for this reason:

Google was “very happy to accept” DeepMind’s ethical red lines “as part of the acquisition.”

Forget the money. Think “ethical red lines.” Okay, that’s an interesting concept for a company which is in the data hoovering business, sells advertising, has a bureaucratic approach I heard described as slime mold, and is being sued for assorted allegations of monopolistic behavior in several countries.

[C] The Time Magazine article includes this statement:

Back at DeepMind’s spiral staircase, an employee explains that the DNA sculpture is designed to rotate, but today the motor is broken. Closer inspection shows some of the rungs of the helix are askew.

Interesting choice of words: “The motor is broken” and “askew.” Is this irony or just the way it is when engineering has to be good enough and advertising powers the buzzing nervous system of the company?

From my point of view, Google has been responding to ChatGPT with academic reminders that the online advertising outfit has a better mousetrap. My thought is that Google knew ChatGPT would be a big deal. That realization sparked the attempt by Google to answer questions with cards and weird little factoids related to the user’s query. The real beef or “wood behind” the program is the catchy forward forward campaign. How is that working out? I don’t have a Google T shirt that spells out Forward Forward. Have you seen one? My research suggests that Google wants to corner the market on low cost training data. Think Snorkel. Google pushes synthetic data because it is not real and, therefore, cannot be dragged into court over improper use of Web-accessible content. Google, I believe, wants to become the Trader Joe’s of off-the-shelf training data and ready-to-run smart software models. The idea has been implemented to some degree at Amazon’s AWS as I recall.

Furthermore, Google’s idea of a PR blitz is talking with MIT researcher Lex Fridman. Mr. Fridman interviewed the Google engineer (now a Xoogler) who thought the DeepMind system was a person and sort of alive. Mr. Fridman also spoke with the boss of DeepMind about smart software. (The video is at this link.) The themes are familiar: great software, more behind the curtains, and doing good with Go and proteins.

Google faces several challenges with its PR effort to respond to ChatGPT:

  1. I am of the opinion that most people, even those involved in smart software, are not aware that Google has been running a PR and marketing campaign to make clear the superiority of its system and method. No mere footnote for the Google. We do proteins. We snorkel. We forward forward. The problem is that ChatGPT is everywhere, and people like high school students are talking about it. Even artists are aware of smart software and instant image generation OpenAI style.
  2. Google remains ill equipped to respond to ChatGPT’s sudden thunder showers and wind storms of social buzz. Not even Google’s rise to fame matches what has happened to OpenAI and ChatGPT in the last few months. There are rumors that Microsoft will do more than provide Azure computing resources for ChatGPT. Microsoft may dump hard cash billions into OpenAI. Who is not excited to punch a button and have Microsoft Word write that report for you? I think high school students will embrace the idea; teachers and article writers at CNet, not so much.
  3. Retooling Google’s decades-old systems and methods for the snappy ChatGPT approach will take time and money. Google has the money, but in the world of bomb cyclones the company may not have time. Technology fortunes can vaporize quickly, like the value of a used Tesla on Cars and Bids.

Net net: Google, believe it or not, has been in its own Googley way trying to respond to its ChatGPT moment. What the company has been doing is interesting. However, unlike some of Google’s technical processes, the online information access world is able to change. Can Google? Will high school students and search engine optimization spam writers care? What about “axis of evil” outfits and their propaganda agencies? What about users who do not know when a machine-generated output is dead wrong? Google may not face an existential crisis, but the company definitely knows something is shaking the once-solid foundations of the buildings on Shoreline Drive.

Stephen E Arnold, January 16, 2023

Ah, Google Logic: The Internet Will Be Ruined If You Regulate Us!

January 16, 2023

I have to give Google credit for crazy logic and chutzpah if the information in “Google to SCOTUS: Liability for Promoting Terrorist Videos Will Ruin the Internet” is on the money. The write up presents as Truth this statement:

Google claimed that denying that Section 230 protections apply to YouTube’s recommendation engine would remove shields protecting all websites using algorithms to sort and surface relevant content—from search engines to online shopping websites. This, Google warned, would trigger “devastating spillover effects” that would devolve the Internet “into a disorganized mess and a litigation minefield”—which is exactly what Section 230 was designed to prevent. It seems that in Google’s view, a ruling against Google would transform the Internet into a dystopia where all websites and even individual users could potentially be sued for sharing links to content deemed offensive. In a statement, Google general counsel Halimah DeLaine Prado said that such liability would lead some bigger websites to overly censor content out of extreme caution, while websites with fewer resources would probably go the other direction and censor nothing.

I think this means the really super duper, magical Internet will be rendered even worse than some people think it is.

I must admit that Google has the money to hire people who will explain a potential revenue hit in catastrophic, life changing, universe disrupting lingo.

Let’s step back. Section 230 was a license to cut down the redwoods of publishing and cover the earth with synthetic grass. The effectiveness of the online ad model generated lots of dough, provided oodles of mouse pads and T shirts to certain people, and provided an easy way to destroy precision and recall in search.

Yep, a synthetic world. Why would Google see any type of legal or regulatory change as really bad … for Google? Interested in some potentially interesting content? Check out YouTube videos retrieved by entering the word “Nasheed.” Want some shortcuts to commercial software? Run a query on YouTube for “sony vegas 19 crack.” Curious about the content that entertains some adults with weird tastes? Navigate to YouTube and run a query for “grade school swim parties.”

Alternatively one can navigate to Google.com and enter these queries for fun and excitement:

  • ammonium and urea nitrate combustion
  • afghan taliban recruitment requirements
  • principal components of methamphetamine

Other interesting queries are supported by Google. Why? Because the company abandoned the crazy idea of an editorial policy and published guidelines for acceptable content, and a lack of informed regulation makes it easy for Google to do whatever it wants.

Now that sense of entitlement and the tech wizard myth is fading. Google has a reason to be frightened. Right now the company is thrashing internally in Code Red mode, banking on the fact that OpenAI will not match Google’s progress in synthetic data, and sticking its talons into the dike in order to control leaks.

What are these leaks? How about cost control, personnel issues, the European Union and its regulators, online advertising competitors, and the perception that Google Search is maybe less interesting than the ChatGPT thing, which one of the super analysts explained this way in superlatives and over-the-top lingo:

There is so much more to write about AI’s potential impact, but this Article is already plenty long. OpenAI is obviously the most interesting from a new company perspective: it is possible that OpenAI becomes the platform on which all other AI companies are built, which would ultimately mean the economic value of AI outside of OpenAI may be fairly modest; this is also the bull case for Google, as they [sic] would be the most well-palace to be the Microsoft to OpenAI’s AWS.

I have put in bold face the superlatives and categorical words and phrases used by the author of “AI and the Big Five.”

Now let’s step in more closely. Google’s appeal is an indication that Google is getting just a tad desperate. Sure it has billions. It is a giant business. But it is a company based on ad technology which is believed to have been inspired by Yahoo, Overture, GoTo ideas. I seem to recall that prior to the IPO a legal matter was resolved with that wildly erratic Yahoo crowd.

We are in the here and now. My hunch is that Google’s legal teams (note the plural) will be busy in 2023. It is not clear how much the company will have to pay and change to deal with a world in which Googley is not as exciting as the cheerleaders who want so much for a new world order of smart software.

What was my point about synthetic data? Stay tuned.

Stephen E Arnold, January 16, 2023

Reproducibility: Academics and Smart Software Share a Quirk

January 15, 2023

I can understand why a human fakes data in a journal article or a grant document: tenure and government money, perhaps. I think I understand why smart software exhibits this same flaw. Humans (intentionally or inadvertently) put their thumbs on the scale when setting thresholds and computational sequences.

The key point is, “Which flaw producer is cheaper and faster: human or code?” My hunch is that smart software wins because in the long run it cannot sue for discrimination, take vacations, or play table tennis at work. The downstream consequence may be that some humans get sicker or die. Let’s ask a hypothetical smart software engineer this question, “Do you care if your model and system causes harm?” I theorize that at least one of the software engineer wizards I know would say, “Not my problem.” The other would say, “Call 1-8-0-0-Y-O-U-W-I-S-H and file a complaint.”

Wowza.

“The Reproducibility Issues That Haunt Health-Care AI” states:

a data scientist at Harvard Medical School in Boston, Massachusetts, acquired the ten best-performing algorithms and challenged them on a subset of the data used in the original competition. On these data, the algorithms topped out at 60–70% accuracy, Yu says. In some cases, they were effectively coin tosses. “Almost all of these award-winning models failed miserably,” he [Kun-Hsing Yu, Harvard] says. “That was kind of surprising to us.”

Wowza wowza.

Will smart software get better? Sure. More data. More better. Think of the start ups. Think of the upsides. Think positively.

I want to point out that smart software may raise an interesting issue: Are flaws inherent because of the humans who created the models and selected the data? Or, are the flaws inherent in the algorithmic procedures buried deep in the smart software?

A palpable desire exists to find and implement a technology that creates jobs, re-juices some venture activities, and props up the questionable idea that technology solves problems and does not create new ones.

What’s the quirk humans and smart software share? Being wrong.

Stephen E Arnold, January 15, 2023

Becoming Sort of Invisible

January 13, 2023

When it comes to spying on one’s citizens, China is second to none. But at least some surveillance tech can be thwarted with enough time, effort, and creativity, we learn from Vice in, “Chinese Students Invent Coat that Makes People Invisible to AI Security Cameras.” Reporter Koh Ewe describes China’s current surveillance situation:

“China boasts a notorious state-of-the-art state surveillance system that is known to infringe on the privacy of its citizens and target the regime’s political opponents. In 2019, the country was home to eight of the ten most surveilled cities in the world. Today, AI identification technologies are used by the government and companies alike, from identifying ‘suspicious’ Muslims in Xinjiang to discouraging children from late-night gaming.”

Yet four graduate students at China’s Wuhan University found a way to slip past one type of surveillance with their InvisDefense coat. Resembling any other fashionable camouflage jacket, the garment includes thermal devices that emit different temperatures to skew cameras’ infrared thermal imaging. In tests using campus security cameras, the team reduced the AI’s accuracy by 57%. That number could have been higher if they did not also have to keep the coat from looking suspicious to human eyes. Nevertheless, it was enough to capture first prize at the Huawei Cup cybersecurity contest.

But wait, if the students were working to subvert state security, why compete in a high-profile competition? The team asserts it was actually working to help its beneficent rulers by identifying a weakness so it could be addressed. According to researcher Wei Hui, who designed the core algorithm:

“The fact that security cameras cannot detect the InvisDefense coat means that they are flawed. We are also working on this project to stimulate the development of existing machine vision technology, because we’re basically finding loophole.”

And yet, Wei also stated,

“Security cameras using AI technology are everywhere. They pervade our lives. Our privacy is exposed under machine vision. We designed this product to counter malicious detection, to protect people’s privacy and safety in certain circumstances.”

Hmm. We learn the coat will be for sale to the tune of ¥500 (about $71). We are sure a list of those who purchase such a garment will be helpful, particularly to the Chinese government.

Cynthia Murrell, January 13, 2023

Semantic Search for arXiv Papers

January 12, 2023

An artificial intelligence research engineer named Tom Tumiel (InstaDeep) created a Web site called arXivxplorer.com.

According to his Twitter message (posted on January 7, 2023), the system is a “semantic search engine.” The service implements OpenAI’s embedding model. The idea is that this search method allows a user to “find the most relevant papers.” There is a stream of tweets at this link about the service. Mr. Tumiel states:

I’ve even discovered a few interesting papers I hadn’t seen before using traditional search tools like Google or arXiv’s own search function or even from the ML twitter hive mind… One can search for similar or “more like this” papers by “pasting the arXiv url directly” in the search box or “click the More Like This” button.
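The embedding approach Mr. Tumiel describes can be sketched in a few lines. This is a toy, not the real arXiv Xplorer: the real service uses OpenAI’s embedding model to map text to dense vectors, whereas the bag-of-words embedder, the three sample “papers,” and the function names below are illustrative assumptions. What the sketch does show is the core mechanic: embed everything, rank by cosine similarity, and treat “more like this” as a search seeded with an existing paper’s own text.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini-corpus of paper titles and abstract snippets.
papers = {
    "Attention Is All You Need": "transformer attention sequence model translation",
    "PageRank": "link analysis web graph eigenvector ranking",
    "ResNet": "deep residual learning image recognition convolution",
}

def search(query: str, top_k: int = 2):
    """Rank papers by similarity of their embedding to the query embedding."""
    q = embed(query)
    ranked = sorted(papers, key=lambda t: cosine(q, embed(papers[t])), reverse=True)
    return ranked[:top_k]

# "More like this" is just a search seeded with an existing paper's text.
assert search("eigenvector ranking of web pages")[0] == "PageRank"
```

The appeal over keyword search is that a good embedding model scores semantically related terms as close even when no words overlap; the toy vectors above cannot do that, but the ranking pipeline around them is the same.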

I ran several test queries, including this one: “Google Eigenvector.” The system surfaced generally useful papers, including one from January 2022. However, when I included the date 2023 in the search string, arXiv Xplorer did not return a null set. The system displayed hits which did not include the date.

Several quick observations:

  1. The system seems to be “time blind,” which is a common feature of modern search systems
  2. The system provides the abstract when one clicks on a link. The “view” button in the pop up displays the PDF
  3. Adding search terms to the query reduces the result set size, a refreshing change from queries which display “infinite scrolling” of irrelevant documents.

For those interested in academic or research papers, will OpenAI become aware of the value of dates, limiting queries to endnotes, and displaying a relationship map among topics or authors in a manner similar to Maltego? By combining more search controls with the OpenAI content and query processing, the service might leapfrog the Lucene/Solr type methods. I think that would be a good thing.

Will the implementation of this system add to Google’s search anxiety? My hunch is that Google is not sure what causes the Google system to perturbate. It may well be that the twitching, the sudden changes in direction, and the coverage of OpenAI itself in blogs may be the equivalent of tremors, soft speaking, and managerial dizziness. Oh, my, that sounds serious.

Stephen E Arnold, January 12, 2023

Spammers, Propagandists, and Phishers Rejoice: ChatGPT Is Here

January 12, 2023

AI-generated art is already receiving tons of backlash from the artistic community, and now writers should tread lightly because, according to No Film School, “You Will Be Impacted By AI Writing…Here Is How.” Hollywood is not a friendly place, but it is certainly weird. Scriptwriters deal with all personalities, especially bad actors, who comment on their work. Now AI algorithms will offer notes on their scripts too.

ChatGPT is a new AI tool that blurs the line between art and aggregation because it can “help” scriptwriters with their work, aka make writers obsolete:

“ChatGPT, and programs like it, scan the internet to help people write different prompts. And we’re seeing it begin to be employed by Hollywood as well. Over the last few days, people have gone viral on Twitter asking the AI interface to write one-act plays based on sentences you type in, as well as answer questions….This is what the program spat back out at me:

‘There is concern among some writers and directors in Hollywood that the use of AI in the entertainment industry could lead to the creation of content that is indistinguishable from human-generated content. This could potentially lead to the loss of jobs for writers and directors, as AI algorithms could be used to automate the process of creating content. Additionally, there is concern that the use of AI in Hollywood could result in the creation of content that is formulaic and lacks the creativity and uniqueness that is typically associated with human-generated content.’”

Egads, that is some good copy! AI automation, however, lacks the spontaneity of human creativity. But the machine generated prose is good enough for spammers, propagandists, phishers, and college students.

Humans are still needed to break the formulaic status quo, but Hollywood bigwigs only see dollar signs, not art. AI creates laughable stories, but it is getting better all the time. AI could and might automate the industry, but the human factor is still needed. The bigger question is: How will humanity’s role change in entertainment?

Whitney Grace, January 12, 2023
