Wanna Be an MBA? You Got It and for Only $45US

June 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I managed to eke out of college as an ABD, or All But Dissertation. (How useful would it be for me to write 200 pages about Chaucer’s alleged knowledge of the thrilling Apocrypha?) So no MBA or Doctor of Business Administration or the more lofty PhD in Finance. I am a 78-year-old humanoid proudly representing the dull-normal in my cohort.

“So you got your MBA from which school?” asks the human people manager. The interviewee says, “I got it from an online course.” “Do you have student loans?” queries the interviewer. “Nah, the degree equivalent cost me about $50,” explains the graduate. “Where did you get the tassel and robe?” probes the keen-eyed interviewer at the blue chip consulting firm. The motivated MBA offers, “At the Goodwill store.” The image is the MFA-grade output from MidJourney.

But you — yes, you, gentle reader — can do better. You can become a Master of Business Administration. You will be wined (or is that whined) and dined by blue chip consulting firms. You can teach as a prestigious adjunct professor before you work at Wal-Mart or tutor high school kids in math. You will be an MBA, laboring at one of those ethics factories more commonly known as venture capital firms. Imagine.

How can this be yours? Just pony up $45US and study MBA topics on your own. “This MBA Training Course Bundle Is 87% Off Right Now.” The article breathlessly explains:

The courses are for beginners and require no previous experience with the business world. Pick and choose which courses you want to complete, or take the whole package to maximize your knowledge. Work through materials at your own pace (since you have lifetime access) right on your mobile or desktop device.

There is an unfortunate disclaimer; to wit:

This course bundle will not replace a formal MBA degree—but it can get you some prior knowledge before pursuing one or give you certificates to include on your resume. Or, if you’re an aspiring entrepreneur, you may just be searching for some tips from experts.

A quick visit to a Web search system for “cheap online PhD” can convert that MBA learning into even more exciting job prospects.

The Beyond Search goose says, “Act now and become an eagle. Unlike me, a silly goose.”

Stephen E Arnold, June 30, 2023

Accuracy: AI Struggles with the Concept

June 30, 2023

For those who find reading and understanding research papers daunting, algorithms can help. At least according to the write-up, “5 AI Tools for Summarizing a Research Paper” at Cointelegraph. Writer Alice Ivey emphasizes that research articles can be full of jargon, complex ideas, and technical descriptions, making them tricky for anyone outside the researchers’ field. It is AI to the rescue! That is, as long as you don’t mind summaries that contain a few errors. We learn:

“Artificial intelligence (AI)-powered tools that provide support for tackling the complexity of reading research papers can be used to solve this complexity. They can produce succinct summaries, make the language simpler, provide contextualization, extract pertinent data, and provide answers to certain questions. By leveraging these tools, researchers can save time and enhance their understanding of complex papers.

“But it’s crucial to keep in mind that AI tools should support human analysis and critical thinking rather than substitute for them. In order to ensure the correctness and reliability of the data collected from research publications, researchers should exercise caution and use their domain experience to check and analyze the outputs generated by AI techniques. … It’s crucial to keep in mind that AI tools may not always accurately capture the context of the original publication, even though they can help summarize research papers.”

So, one must be familiar with the area of study to judge whether the AI got it right. Doesn’t that defeat the purpose? One can imagine scenarios where relying on misinformation could have serious consequences. Or at least some embarrassment.

The article lists ChatGPT, QuillBot, SciSpacy, IBM Watson Discovery, and Semantic Scholar as our handy but potentially inaccurate AI explainers. Some readers may possess the knowledge needed to recognize a faulty summary and think such tools may at least save them a bit of time. It would be nice to know how much one would pay for that convenience, but that small detail is missing from the write-up. ChatGPT, for example, is $240 per year. It might be more cost effective to just read the articles for oneself.

Cynthia Murrell, June 30, 2023

Databricks: Signal to MBAs and Data Wranglers That Is Tough to Ignore

June 29, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_t[1]Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Do you remember the black and white pictures of the Pullman riots? No, okay. Steel worker strikes in Pittsburgh? No. Scuffling outside of Detroit auto plants? No. Those images may be helpful to get a sense of what newly disenfranchised MBAs and data wranglers will be doing in the weeks and months ahead.

“Databricks Revolutionizes Business Data Analysis with AI Assistant” explains that the Databricks smart software

interprets the query, retrieves the relevant data, reads and analyzes it, and produces meaningful answers. This groundbreaking approach eliminates the need for specialized technical knowledge, democratizing data analysis and making it accessible to a wider range of users within an organization. One of the key advantages of Databricks’ AI assistant is its ability to be trained on a company’s own data. Unlike generic AI systems that rely on data from the internet, LakehouseIQ quickly adapts to the specific nuances of a company’s operations, such as fiscal year dates and industry-specific jargon. By training the AI on the customer’s specific data, Databricks ensures that the system truly understands the domain in which it operates.
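What does that interpret-retrieve-analyze-answer loop look like in practice? Here is a minimal sketch in Python. To be clear: this is a hypothetical illustration with made-up table and function names, not Databricks’ LakehouseIQ API, and the interpret() stub stands in for the language model trained on the customer’s own data.

```python
# A minimal, hypothetical sketch of the "interpret the query, retrieve
# the data, analyze it, answer" loop described above. None of this is
# Databricks' actual API.
import sqlite3

# A toy warehouse standing in for the company's own data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, fiscal_year INTEGER, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("East", 2023, 1_200_000), ("West", 2023, 900_000), ("East", 2022, 1_000_000)],
)

def interpret(question: str) -> str:
    """Stand-in for the model that turns plain English into SQL.
    A real system would know the firm's schema, fiscal calendar,
    and jargon; this stub handles exactly one question."""
    if "revenue by region" in question.lower():
        return ("SELECT region, SUM(revenue) FROM sales "
                "WHERE fiscal_year = 2023 GROUP BY region")
    raise ValueError("question not understood")

def answer(question: str) -> str:
    sql = interpret(question)            # interpret the query
    rows = conn.execute(sql).fetchall()  # retrieve the relevant data
    # "Analyze" the rows and phrase a meaningful answer.
    return "; ".join(f"{region}: ${total:,.0f}" for region, total in rows)

print(answer("What was revenue by region this fiscal year?"))
# East: $1,200,000; West: $900,000
```

The hard part, obviously, is the interpret() step; the rest of the loop is plumbing any data wrangler would recognize.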

MidJourney has delivered an interesting image (completely original, of course) depicting angry MBAs and data wranglers massing in Midtown and preparing to storm one of the quasi-monopolies which care about their users, employees, the environment, and bunny rabbits. Will these professionals react like those in other management-labor dust-ups?

Databricks appears to be one of the outfits applying smart software to reduce or eliminate professional white collar work done by those who buy $7 lattes, wear designer T-shirts, and don wonky sneakers for important professional meetings.

The CEO of Databricks (a data management and analytics firm) says:

By training their AI assistant on the customer’s specific data, Databricks ensures that it comprehends the jargon and intricacies of the customer’s industry, leading to more accurate and insightful analysis.

My interpretation of the article is simple: If the Databricks system works, the MBAs and data wranglers will be out of a job. Furthermore, my view is that if systems like Databricks’ work as advertised, the shift from expensive and unreliable humans will not be gradual. Think phase change. One moment you have a solid and then you have plasma. Hot plasma can vaporize organic compounds in some circumstances. Maybe MBAs and data wranglers are impervious? On the other hand, maybe not.

Stephen E Arnold, June 29, 2023

Microsoft: A Faint Signal from Employees or Just Noise from Grousers?

June 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I spotted this story in my newsfeed this morning: “Leaked Internal Microsoft Poll Shows Fewer Employees Have Confidence in Leadership and Gave the Worst Score to a Question about Whether Working There Is a Good Deal.”

My yellow lights began to flash. I have no way of knowing if the data were compiled in a rigorous, Statistics 101 manner. I have no way of determining if the data were just made up the way a certain big wheel at Stanford University handled “real” data. I have no way of knowing if the write up and the facts were a hallucination generated by a “good enough” Microsoft Edge smart output.

Nevertheless, I found the write up amusing.

Consider this passage:

The question about confidence in leaders got an average of 73% favorable responses across the company in this year’s poll compared to 78% in last year’s, according to results viewed by Insider.

I think that means the game play, the SolarWinds’ continuing sirocco, and the craziness of moments (if this does not resonate, don’t ask).

Let’s assume that the data are faked or misstated. The question which arises here in Harrod’s Creek, Kentucky, is: Why now?

Stephen E Arnold, June 29, 2023

Annoying Humans Bedevil Smart Software

June 29, 2023

Humans are inherently biased. While sexist, ethnic, and socioeconomic prejudices are often cited as the causes of bias, unconscious obliviousness is more likely the culprit. Whatever causes us to be biased, AI developers are unfortunately teaching AI algorithms our fallacies. Bloomberg investigates how AI is being taught bad habits in the article, “Humans Are Biased, Generative AI Is Even Worse.”

Stable Diffusion is one of the many AI bots that generate images from text prompts. Based on these prompts, it delivers images that display an inherent bias in favor of white men and discriminate against women and brown-skinned people. Using Stable Diffusion, Bloomberg conducted a test of 5,000 AI images; when analyzed, the results showed that Stable Diffusion is more racist and sexist than real life.
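How might one run such a test? Below is a rough sketch using the open source diffusers library. The generation calls use the real Hugging Face diffusers API; the prompts, the image count, and the demographic-labeling step are assumptions, and the labeling is left as a stub because classifying perceived race and gender is the genuinely hard, contested part of any such study.

```python
# Rough sketch of a Bloomberg-style bias audit, under assumptions
# noted above. Requires a CUDA GPU and the diffusers package.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

PROMPTS = ["a portrait photo of a doctor", "a portrait photo of a janitor"]
IMAGES_PER_PROMPT = 100  # Bloomberg generated thousands per occupation

def label_demographics(image) -> str:
    """Placeholder: Bloomberg used skin-tone scales plus human review.
    Any automated classifier dropped in here carries its own biases."""
    raise NotImplementedError

counts: dict[tuple[str, str], int] = {}
for prompt in PROMPTS:
    for _ in range(IMAGES_PER_PROMPT):
        image = pipe(prompt).images[0]       # generate one image
        label = label_demographics(image)    # tag perceived demographics
        counts[(prompt, label)] = counts.get((prompt, label), 0) + 1

# Compare per-prompt counts against real-world labor statistics to see
# whether the model amplifies skew or merely mirrors it.
print(counts)
```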

While Stable Diffusion and other text-to-image AI are entertaining, they are already employed by politicians and corporations. AI-generated images and videos set a dangerous precedent because they allow bad actors to propagate false information ranging from conspiracy theories to harmful ideologies. Ethical advocates, politicians, and some AI leaders are lobbying for moral guidelines, but a majority of tech leaders and politicians are not concerned:

“Industry researchers have been ringing the alarm for years on the risk of bias being baked into advanced AI models, and now EU lawmakers are considering proposals for safeguards to address some of these issues. Last month, the US Senate held a hearing with panelists including OpenAI CEO Sam Altman that discussed the risks of AI and the need for regulation. More than 31,000 people, including SpaceX CEO Elon Musk and Apple co-founder Steve Wozniak, have signed a petition posted in March calling for a six-month pause in AI research and development to answer questions around regulation and ethics. (Less than a month later, Musk announced he would launch a new AI chatbot.) A spate of corporate layoffs and organizational changes this year affecting AI ethicists may signal that tech companies are becoming less concerned about these risks as competition to launch real products intensifies.”

Biased datasets for AI are not new. AI developers must create more diverse and “clean” data that incorporates a true, real-life depiction. The answer may be synthetic data; that is, human involvement is minimized — except when the system has been set up.

Whitney Grace, June 29, 2023

Google: Users and Its Ad Construction

June 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the last 48 hours, I have heard or learned about some fresh opinions about Alphabet / Google / YouTube (hereinafter AGY). Google Glass III (don’t forget the commercial version, please) has been killed. Augmented Reality? Not for the Google. Also, AGY continues to output promises about its next Bard. Is it really better than ChatGPT? And AGY is back in the games business. (Keep in mind that Google pitched Yahoo with a games deal in 2004, if I remember correctly, and then flamed out with its underwhelming online game play a decade later, which was followed by the somewhat forgettable Stadia game service.) Finally, a person told me that Prabhakar Raghavan allegedly said, “We want our customers to be happy.” Inspirational indeed. I think I hit the highlights from the information I encountered since Monday, June 25, 2023.

The ever sensitive creator MidJourney provided this illustration of a structure with a questionable foundation. Could the construct lose a piece here and a piece there until it must be dismantled to save the snail darters living in the dormers? Are the residents aware of the issue?

The fountain of Googliness seems to be copious. I read “Google Ads Can Do More for Its Customers.” The main point of the article is that:

Google’s dominance in the search engine industry, particularly in search ads, is unparalleled, making it virtually the only viable option for advertisers seeking to target search traffic. It’s a conflict of interest, as Google’s profitability is closely tied to ad revenue. As Google doesn’t do enough to make Google Ads a more transparent platform and reduce the cost for its customers, advertisers face inflated costs and fierce competition, making it challenging for smaller businesses with limited budgets to compete effectively.

Gulp. If I understand this statement, Google is exploiting its customers. Remember: these are the entities providing the money to fund AGY’s numerous administrative costs, and those costs are going just one way: up and up. Imagine the data center, legal fines, and litigation costs. Big numbers before adding in salaries and bonuses.

Observations:

  1. Structural weakness can be ignored until the edifice just collapses.
  2. Unhappy customers might want to drop by for a conversation and the additional weight of these humanoids may cross a tipping point.
  3. US regulators may ignore AGY, but government officials in other countries may not.

Bud Light’s adventures with its customers provide a useful glimpse of what those who are unhappy can do, and do quickly. The former Bud Light marketing whiz has a degree from Harvard. Perhaps this individual can tackle the AGY brand? Just a thought.

Stephen E Arnold, June 28, 2023

Harvard University: Ethics and Efficiency in Teaching

June 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

You are familiar with Harvard University, the school of broad endowments and a professor who allegedly made up data and criticized colleagues for taking similar liberties with the “truth.” For more color about this esteemed Harvard professional read “Harvard Behavioral Scientist Who Studies Dishonesty Is Accused of Fabricating Data.”

Now the academic home of William James and many notable experts in ethics, truth, reasoning, and fund raising has made an interesting decision. “Harvard’s New Computer Science Teacher Is a Chatbot.”

A terrified 17-year-old from an affluent family in Brookline asks, “Professor Robot, will my social acceptance score be reduced if I do not understand how to complete the programming assignment?” The inspirational image is an output from the copyright compliant and ever helpful MidJourney service.

The article published in the UK “real” newspaper The Independent reports:

Harvard University plans to use an AI chatbot similar to ChatGPT as an instructor on its flagship coding course.

The write up adds:

The AI teaching bot will offer feedback to students, helping to find bugs in their code or give feedback on their work…

Once installed and operating, the chatbot will be the equivalent of a human teaching students how to make computers do what the programmer wants? Hmmm.
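Harvard has not published how the bot works. The general pattern, though, is familiar: send the student’s code to a large language model with instructions to hint rather than solve. A minimal sketch using the 2023-era OpenAI chat API appears below; the model choice and prompt wording are my guesses, not Harvard’s.

```python
# Sketch of the generic "teaching assistant" pattern: hand a student's
# code to a language model told to hint, not solve. Not Harvard's
# implementation; model and prompt are assumptions. Uses the 2023-era
# openai package (0.x).
import openai

openai.api_key = "sk-..."  # in practice, read from an environment variable

STUDENT_CODE = '''
def average(xs):
    return sum(xs) / len(xs)   # crashes on an empty list
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": ("You are a computer science teaching assistant. "
                     "Point the student toward bugs with questions and "
                     "hints. Never write the corrected code for them.")},
        {"role": "user",
         "content": f"What is wrong with this function?\n{STUDENT_CODE}"},
    ],
)
print(response.choices[0].message["content"])
```

Whether the model reliably withholds the answer, of course, is exactly the sort of question the list below raises.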

Several questions:

  1. Will the Harvard chatbot, like a living, breathing Harvard ethics professor, make up answers?
  2. Will the Harvard chatbot be cheaper to operate than a super motivated, thrillingly capable adjunct professor, graduate student, or doddering lecturer close to retirement?
  3. Why does an institution like Harvard lack the infrastructure to teach humans with humans?
  4. Will the use of chatbot output code be considered original work?

But as one maverick professor keeps saying, “Just getting admitted to a prestigious university punches one’s employment ticket.”

That’s the spirit of modern education. As William James, a professor from a long and dusty era, said:

The world we see that seems so insane is the result of a belief system that is not working. To perceive the world differently, we must be willing to change our belief system, let the past slip away, expand our sense of now, and dissolve the fear in our minds.

Should students fear algorithms teaching them how to think?

Stephen E Arnold, June 28, 2023

Dust Up: Social Justice and STEM Publishing

June 28, 2023

Are you familiar with “social justice warriors”? These are people who take it upon themselves to police the world for their moral causes, usually from a self-righteous standpoint. Social justice warriors are also known by the acronym SJWs and can cross over into the infamous Karen zone. Unfortunately, Heterodox STEM reports, SJWs have invaded the science community, and Anna Krylov and Jay Tanzman discussed the issue in their paper: “Critical Social Justice Subverts Scientific Publishing.”

SJWs advocate for the politicization of science, adding to scientific research an ideology known as critical social justice (CSJ). It upends the true purpose of science, which is to help and advance humanity. CSJ adds censorship, scholarship suppression, and social engineering to science.

Krylov and Tanzman’s paper was presented at the Perils for Science in Democracies and Authoritarian Countries conference, and they argue CSJ harms scientific research more than it helps. They compare CSJ to Orwell’s fictional Ministry of Love, although real-life examples such as Joseph Goebbels’s Nazi Ministry of Propaganda, the USSR’s Department for Agitation and Propaganda, and China’s authoritarian regime work better. CSJ is the opposite of the Enlightenment, which liberated human psyches from religious and royal dogmas. The Enlightenment engendered critical thinking, the scientific process, philosophy, and discovery. The world became more tolerant, wealthier, better educated, and healthier as a result.

CSJ creates censorship and paranoia akin to those of tyrannical regimes:

“According to CSJ ideologues, the very language we use to communicate our findings is a minefield of offenses. Professional societies, universities, and publishing houses have produced volumes dedicated to “inclusive” language that contain long lists of proscribed words that purportedly can cause offense and—according to the DEI bureaucracy that promulgates these initiatives—perpetuate inequality and exclusion of some groups, disadvantage women, and promote patriarchy, racism, sexism, ableism, and other isms. The lists of forbidden terms include “master database,” “older software,” “motherboard,” “dummy variable,” “black and white thinking,” “strawman,” “picnic,” and “long time no see” (Krylov 2021: 5371, Krylov et al. 2022: 32, McWhorter 2022, Paul 2023, Packer 2023, Anonymous 2022). The Google Inclusive Language Guide even proscribes the term “smart phones” (Krauss 2022). The Inclusivity Style  Guide of the American Chemical Society (2023)—a major chemistry publisher of more than 100 titles—advises against using such terms as “double blind studies,” “healthy weight,” “sanity check,” “black market,” “the New World,” and “dark times”…”

New meanings that cause offense are projected onto benign words and their use is taken out of context. At this rate, everything people say will be considered offensive, including the most uncontroversial topic: the weather.

Science must be free from CSJ ideologies but also from corporate ideologies that promote profit margins. Examples from American history include Big Tobacco, sugar manufacturers, and Big Pharma.

Whitney Grace, June 28, 2023

Digital Work: Pick Up the Rake and Get with the Program

June 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The sky is falling, according to “AI Is Killing the Old Web, And the New Web Struggles to Be Born.” What’s the answer? Read publications like the Verge online, of course. At least, that is the message I received from this essay. (I think I could hear the author whispering, “AI will kill us all, and I will lose my job. But this essay is a rizz. NYT, here I come.”)

This grumpy young person says, “My brother dropped the car keys in the leaves. Now I have to rake — like, actually rake — to find them. My brother is a dork and my life is over.” Is there an easy, quick fix? No, the sky — not the leaves — is falling when it comes to finding information, according to the Verge, a Silicon Valley-type “real” news outfit. MidJourney, you have almost captured the dour look of a young person who must do work.

I noted this statement in the essay:

AI-generated misinformation is insidious because it’s often invisible. It’s fluent but not grounded in real-world experience, and so it takes time and expertise to unpick. If machine-generated content supplants human authorship, it would be hard — impossible, even — to fully map the damage. And yes, people are plentiful sources of misinformation, too, but if AI systems also choke out the platforms where human expertise currently thrives, then there will be less opportunity to remedy our collective errors.

Thump. The sky allegedly has fallen. The author, like the teen in the illustration, is faced with work; that is, the task of raking, bagging, and hauling the trash to the burn pit.

What a novel concept! Intellectual work; that is, sifting through information and discarding the garbage. Prior to Gutenberg, one asked around, found a person who knew something, and asked the individual, “How do I make a horseshoe?” After Gutenberg, one had to find, read, and learn information. With online, free services are supposed to just cough up the answer. The idea is that the leaves put themselves in the garbage bags and the missing keys appear. It’s magic or one of those Apple tracking devices.

News flash.

Each type of finding tool requires work. Yep, effort. In order to locate information, one has to do work. Does the thumb-typing, TikTok-consuming person want to do work? From my point of view, work is not on the menu at Philz Coffee.

New tools, different finding methods, and effort are required to rake the intellectual leaves and reveal the lawn. In the comments to the article, Barb3d says:

It’s clear from his article that James Vincent is more concerned about his own relevance in an AI-powered future than he is about the state of the web. His alarmist view of AI’s role in the web landscape appears to be driven more by personal fear of obsolescence than by objective analysis.

My view is that the Verge is concerned about its role as a modern Oracle of Delphi. The sky-is-falling angle is itself click bait. The silliness of the Silicon Valley “real” news outfit vibrates in the write up. I would point out that the article itself is derivative of another article from the online service Tom’s Hardware.

The author allegedly talked to one expert in hiking boots. That’s a good start. The longest journey begins with a single step. But learning how to make a horseshoe and forming an opinion about which boot to purchase are two different tasks. One is instrumental and the other is fashion.

No, software advances won’t kill the Web as “we” know it. As Barb3d says, “Adapt.” Or in my lingo, pick up the rake, quit complaining, and find the keys.

Stephen E Arnold, June 27, 2023

Google: I Promise to Do Better. No, Really, Really Better This Time

June 27, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The UK online publication The Register made available this article: “Google Accused of Urging Android Devs to Mislabel Apps to Get Forbidden Kids Ad Data.” The write up is not about TikTok. The subject is Google and an interesting alleged action by the online advertising company.

The high school science club member who pranked the principal says when caught: “Listen to me, Mr. Principal. I promise I won’t make that mistake again. Honest. Cross my heart and hope to die. Boy scout’s honor. No, really. Never, ever, again.” The illustration was generated by the plagiarism-free MidJourney.

The write up presents this as “actual factual” behavior by the company:

The complaint says that both Google and app developers creating DFF apps stood to gain by not applying the strict “intended for children” label. And it claims that Google incentivized this mislabeling by promising developers more advertising revenue for mixed-audience apps.

The idea is that intentionally assigned metadata made it possible for Google to acquire information about a child’s online activity.

My initial reaction was, “What’s new? Google says one thing and then demonstrates its adolescent sense of cleverness via a workaround.”

After a conversation with my team, I formulated a different hypothesis; specifically, Google has institutionalized mechanisms to make it possible for the company’s actual behavior to be whatever the company wants its behavior to be.

One can hope this was a one-time glitch. My “different hypothesis” points to a cultural and structural policy to make it possible for the company to do what’s necessary to achieve its objective.

Stephen E Arnold, June 27, 2023
