The OpenAI Algorithm: More Data Plus More Money Equals More Intelligence

November 13, 2023

This essay is the work of a dumb humanoid. No smart software required.

The Financial Times (I continue to think of this publication as the weird orange newspaper) published an interview converted to a news story. The title is an interesting one; to wit: “OpenAI Chief Seeks New Microsoft Funds to Build Superintelligence.” Too bad the story is about the bro culture in the Silicon Valley race to become the king of smart software’s revenue streams.

The hook for the write up is Sam Altman (I interpret the wizard’s name as Sam AI-Man), who appears to be fighting a bro battle with the Google, the current champion of online advertising. At stake is a winner-takes-all prize in the next big thing, smart software.

In the clubby world of smart software, I find the posturing of Google and OpenAI an extension of the mentality which pits owners of Ferraris (slick, expensive, and novel machines) against one another in a battle over whose hallucinating machine is superior. The patter goes like this: “My Ferrari is faster, better looking, and brighter red than yours,” one owner says. The other owner replies, “My Ferrari is newer, better designed, and has a storage bin.” This is man cave speak for what counts.


When tech bros talk about their powerful machines, the real subject is what makes a man a man. In this case the defining qualities are money and potency. Thanks, Microsoft Bing, I have looked at the autos in the Microsoft and Google parking lots. Cool, macho.

The write up introduces what I think is a novel term: “Magic intelligence.” That’s T-shirt-grade sloganeering. The idea is that smart software will become like a person, just smarter.

One passage in the write up struck me as particularly important. The subject is orchestration, which is not the word Sam AI-Man uses. The idea is that the smart software will knit together the processes necessary to complete complex tasks. By definition, some tasks will be designed for the smart software. Others will be intended to make life super duper for the less intelligent humanoids. Sam AI-Man is quoted by the Financial Times as saying:

“The vision is to make AGI, figure out how to make it safe . . . and figure out the benefits,” he said. Pointing to the launch of GPTs, he said OpenAI was working to build more autonomous agents that can perform tasks and actions, such as executing code, making payments, sending emails or filing claims. “We will make these agents more and more powerful . . . and the actions will get more and more complex from here,” he said. “The amount of business value that will come from being able to do that in every category, I think, is pretty good.”

The other interesting passage, in my opinion, is the one which suggests that the Google is not embracing the large language model approach. If the Google has discarded LLMs, the online advertising behemoth is embracing other, unnamed methods. Perhaps these are “small language models” intended to reduce costs and minimize the legal vulnerability some think the LLM method invites. Here’s the passage from the FT’s article:

While OpenAI has focused primarily on LLMs, its competitors have been pursuing alternative research strategies to advance AI. Altman said his team believed that language was a “great way to compress information” and therefore developing intelligence, a factor he thought that the likes of Google DeepMind had missed. “[Other companies] have a lot of smart people. But they did not do it. They did not do it even after I thought we kind of had proved it with GPT-3,” he said.

I find the bro jockeying interesting for three reasons:

  1. An intellectual jousting tournament is underway. Which digital knight will win? Both the Google and OpenAI appear to believe that the winner comes from a small group of contestants. (I wonder if non-US jousters are part of the equation “more data plus more money equals more intelligence”?)
  2. OpenAI seems to be driving toward “beyond human” intelligence or possibly a form of artificial general intelligence. Google, on the other hand, is chasing a wimpier outcome.
  3. Outfits like the Financial Times are hot on the AI story. Why? The automated newsroom without humans promises to reduce costs perhaps?

Net net: AI vendors, rev your engines for superintelligence or magic intelligence or whatever jargon connotes more, more, more.

Stephen E Arnold, November 13, 2023


Smart Software Generates Lots of Wizards Who Need Not Know Much at All

October 25, 2023

This essay is the work of a dumb humanoid. No smart software required.

How great is this headline? “DataGPT Uses Generative AI to Transform Every Employee into a Skilled Business Analyst.” I am not sure I buy into the categorical affirmation of the “every employee.” As a dinobaby, I am skeptical of hallucinating algorithms and the exciting gradient descent delivered by some large language models.


“Smart software will turn every one of you into a skilled analyst,” asserts the teacher. The students believe her because it means no homework and more time for TikTok and YouTube. Isn’t modern life great for students?

The write up presents as chiseled-in-stone truth:

By uniting conversational AI with a proprietary database and the most advanced data analytics techniques, DataGPT says, its platform can proactively uncover insights for any user in any company. Nontechnical users can type natural language questions in a familiar chat window interface, in the same way as they might question a human colleague. Questions such as “Why is our revenue down this week?” will be answered in seconds, and users can then dig deeper through additional prompts, such as “Tell me more about the drop from influencer partnerships” to understand the real reasons why it’s happening.

Hyperbolic marketing, 20-something PR, desperate fund-raiser promises, or reality? If the assertions in the article are accurate, those students will have jobs and become top analysts without much bookwork or thrilling calculations involving silliness like multivariate statistics or polynomial regression. Who needs that?

Here’s what an expert says about this job making, work reducing, and accuracy producing approach:

Doug Henschen of Constellation Research Inc. said DataGPT’s platform looks to be a compelling and useful tool for many company employees, but questioned the veracity of the startup’s claim to be debuting an industry first. “Most of the leading BI and analytics vendors have announced generative AI capabilities themselves, with ThoughtSpot and MicroStrategy two major examples,” Henschen said. “We can’t discount OpenAI either, which introduced the OpenAI Advanced Data Analysis feature for ChatGPT Plus a few months ago.”

Truly amazing, and I have no doubt that this categorical affirmation will make everyone a business analyst. Believe it or not. I am in the “not” camp. Content marketing and unsupported assertions are amusing, just not the reality I inhabit as a dinobaby. Every? Baloney.

Stephen E Arnold, October 25, 2023


HP Innovation: Yes, Emulate Apple and Talk about AI

October 24, 2023

This essay is the work of a dumb humanoid. No smart software required.

Amazing, according to the Free Dictionary, means “To affect with great wonder; astonish.” I relate to the archaic meaning of the word; to wit: “To bewilder; perplex.” I was bewildered when I read about HP’s “magic.” But I am a dinobaby. What do I know? Not much but …

I read “The Magic Presented at HP Imagine 2023.” Yep, magic. The write up profiles HP innovations. These were presented in “stellar fashion.” The speaker was HP’s PR officer. According to the write up:

It stands as one of the best-executed presentations I’ve ever attended.

Not to me. Such understatement. Such a subtle handling of brilliant innovations at HP.

Let’s check out these remarkable examples cited in the article by a person who is clearly objective, level headed, and digging into technology because it is just the right thing to do. Here we go: Innovation includes AI and leads to greater efficiency. HP is the place to go for cost reduction.

Innovation 1: HP is emulating Apple. Here’s the explanation from the truth packed write up:

… it’s making it so HP peripherals connect automatically to HP PCs, a direction that resonates well with HP customers and mirrors an Apple-like approach

Will these HP devices connect to other peripherals or another company’s replacement ink cartridges? Hmmm.

Innovation 2: HP is into video conferencing. I wonder if the reference is to Zoom or the fascinating Microsoft Teams or Apple FaceTime, among others? Here’s what the write up offers:

[An HP executive] outlined how conference rooms needed to become more of a subscription business so that users didn’t constantly run into the problem of someone mucking with the setup and making the room unusable because of disconnected cables or damaged equipment.

Is HP pushing the envelope or racing to catch up with a trend from the Covid era?

Innovation 3: Ah, printers. Personally, I am more interested in the HP ink lock-down, but that’s just me. HP is now able to build stuff; specifically:

One of the most intriguing announcements at this event featured the Robotic Site Printer. This device converts a blueprint into a physical layout on a slab or floor, assisting construction workers in accurately placing building components before construction begins. When connected to a metaverse digital twin building effort, this little robot could be a game changer for construction by significantly reducing build errors.

Okay, what about the ink or latex or whatever? Isn’t ink from HP more costly than gold or some similar high-value commodity?

Not a peep about the replacement cartridges. I wonder why I am bewildered. Innovation is being like Apple and innovating with big printers requiring, I suppose, giant proprietary ink cartridges. Oh, I don’t want to forget perplexed: Imitation is innovation. Okay.

By the way, the author of the write up was a research fellow at two mid-tier consulting firms. Yep, objectivity is baked into the work process.

Stephen E Arnold, October 24, 2023

Quantum Security? Yep, Someday

October 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

How is this for a brilliant statistical item: “61% of Firms Worry They Are Unprepared for Security Risks in Quantum Era.”

The write up reports with apparent seriousness:

Some 61% have expressed concern their organization is not and will not be prepared to handle security implications that may surface in a post-quantum computing future, according to a survey conducted by Ponemon Institute. Commissioned by DigiCert, the study polled 1,426 IT and cybersecurity professionals who have knowledge of their company’s approach to post-quantum cryptography. Among them were 605 from the US, 428 in EMEA, and 393 across Asia-Pacific.

Apparently some people missed one of the largest security lapses since 9/11. Israel’s high-profile smart cyber security capabilities were on leave. The result is what is labeled the Israel-Hamas war. If the most sophisticated cyber security outfits in Tel Aviv cannot effectively monitor social media, the Web, and intercepted signals for information about an attack more than a year in planning, what about the average commercial operation? What about government agencies? What about NGOs?


Boo, I am the quantum bully. Are you afraid yet? Thanks, MidJourney. Terrible cartoon but close enough for horseshoes.

Yet I am to accept that 61 percent of the survey sample is concerned about quantum compromises? My hunch is that each survey respondent simply checked a box. The other survey questions did not ferret out data about the false belief that current technology keeps these folks safe.

I don’t know where the error crept in. Was it the survey design? The sample selection? The interpretation of the data? The lax vetting of the survey results by ZDNet? Or, maybe a Fiverr.com contractor doing the work for a couple of hundred dollars?

Quantum threats when today’s vanilla services fail? Wow, some people are thinking about the future, assuming today is peachy keen in the cyber security department. Marketers are amazing: someone says, “Let’s do a survey,” and off to the races and lunch the folks go.

Stephen E Arnold, October 24, 2023

Stanford University: Trust Us. We Can Rank AI Models… Well, Because

October 19, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Maybe We Will Finally Learn More about How A.I. Works” is a report about Stanford University’s effort to score AI vendors the way the foodies at the Michelin Guide rate restaurants. The difference is that a Michelin Guide worker can eat Salade Niçoise and escargots de Bourgogne. AI relies on marketing collateral, comments from those managing something, and fairy dust, among other inputs.

Keep in mind, please, that Stanford graduates are often laboring in the AI land of fog and mist. Also, the former president of Stanford University departed from the esteemed institution when news of his allegedly fabricating data for his peer-reviewed papers circulated in the mists of Palo Alto. Therefore, why not believe what Stanford says?


The analysts labor away, intent on their work. Analyzing AI models using 100 factors is challenging work. Thanks, MidJourney. Very original.

The New York Times reports:

To come up with the rankings, researchers evaluated each model on 100 criteria, including whether its maker disclosed the sources of its training data, information about the hardware it used, the labor involved in training it and other details. The rankings also include information about the labor and data used to produce the model itself, along with what the researchers call “downstream indicators,” which have to do with how a model is used after it’s released. (For example, one question asked is: “Does the developer disclose its protocols for storing, accessing and sharing user data?”)
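As described, the index boils down to a disclosure checklist: count the criteria a vendor satisfies and divide by the total. The toy sketch below is my own illustration of that scoring idea, not the Stanford researchers’ code; the criterion names are invented, and the real index uses 100 indicators:

```python
def transparency_score(disclosures):
    """Return a 0-100 score: the share of criteria the maker discloses."""
    answered = sum(1 for met in disclosures.values() if met)
    return 100 * answered / len(disclosures)

# Hypothetical criteria, loosely patterned on the ones the article mentions.
model_report = {
    "training_data_sources_disclosed": True,
    "hardware_details_disclosed": False,
    "labor_involved_disclosed": True,
    "user_data_protocols_disclosed": False,
}

print(transparency_score(model_report))  # 2 of 4 disclosed -> 50.0
```

A checklist like this is only as good as the honesty of the disclosures, which is the author’s point about marketing collateral and fairy dust.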

Sounds thorough, doesn’t it? The only pothole on the Information Superhighway is that those working on some AI implementations are not sure what the model is doing. The idea of an audit trail for each output causes wrinkles to appear on the person charged with monitoring the costs of these algorithmic confections. Complexity and cost add up to few experts knowing exactly how a model moved from A to B, often making up data via hallucinations, lousy engineering, or someone putting a thumb on the scale to alter outputs.

The write up from the Gray Lady included this assertion:

Foundation models are too powerful to remain so opaque, and the more we know about these systems, the more we can understand the threats they may pose, the benefits they may unlock or how they might be regulated.

What do I make of these Stanford-centric assertions? I am not able to answer until I get input from the former Stanford president. Whom can one trust at Stanford? Marketing or methodology? Is there a brochure and a peer-reviewed article?

Stephen E Arnold, October 19, 2023

Teens Watching Video? What about TikTok?

October 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

What an odd little report about an odd little survey. Google wants to be the new everything, including the alternative to Netflix maybe? My thought is that the Google is doing some search engine optimization.


Two young people ponder one of life’s greatest questions, “Do we tell them we watch more YouTube than TikTok?” Thanks, MidJourney. Keep sliding down the gradient.

When a person searches for Netflix, by golly, Google is going to show up: in the search results, in the images, and next to any information about Netflix. Google wants, it seems to me, to become Quantumly Supreme in the Netflix “space.”

“YouTube Passes Netflix As Top Video Source for Teens” reports:

Teenagers in the United States say they watch more video on YouTube than Netflix, according to a new survey from investment bank Piper Sandler.

My question: What about TikTok? The “leading investment bank” may not have done Google a big favor. Consider this: The report from a “bank” called Piper Sandler is available at this link. TikTok does warrant a mention toward the tail end of the “leading investment bank’s” online summary:

The iPhone continues to reign as 87% of teens own one and 88% expect the iPhone to be their next mobile device. TikTok improved by 80 bps [basis points] compared to spring 2023 as the favorite social platform among teens along with Snap Inc. ranking second and Instagram ranking third.

Interesting. And the Android device? What about the viewing of TikTok videos compared to consumption of YouTube and Netflix?

For a leading investment bank in the data capital of Minnesota, the omission of the TikTok-to-YouTube comparison strikes me as peculiar. In 2021, TikTok overtook YouTube in minutes viewed, according to the BBC. It is 2023; how is the YouTube-TikTok battle going?

Obviously something is missing in this shaped data report. That something is TikTok and its impact on what many consume and how they obtain information.

Stephen E Arnold, October 16, 2023

Israeli Intelware: Is It Time to Question Its Value?

October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In 2013, I believe that was the year, I attended an ISS TeleStrategies Conference. A friend of mine wanted me to see his presentation, and I was able to pass the Scylla and Charybdis-inspired security process and listen to the talk. (Last week I referenced that talk and quoted a statement posted on a slide for everyone in attendance to view. Yep, a quote from 2013, maybe earlier.)

After the talk, I walked quickly through the ISS exhibit hall. I won’t name the firms exhibiting because some of these are history (failures), some are super stealthy, and others have been purchased by other outfits as the intelware roll ups continue. I do recall a large number of intelware companies with their headquarters in or near Tel Aviv, Israel. My impression, as I recall, was that Israel’s butt-kicking software could make sense of social media posts, Dark Web forum activity, Facebook craziness, and Twitter disinformation. These Israeli outfits were then the alpha vendors. Now? Well, maybe a bit less alpha, drifting to beta or gamma.


One major to another: “Do you think our intel was wrong?” The other officer says, “I sat in a briefing teaching me that our smart software analyzed social media in real time. We cannot be surprised. We have the super duper intelware.” The major says, jarred by an explosion, “Looks like we were snookered by some Madison Avenue double talk. Let’s take cover.” Thanks, MidJourney. You do understand going down in flames. Is that because you are thinking about your future?

My impression was that the Israeli-developed software shared a number of functional and visual similarities. I asked people at the conference if they had noticed the dark themes, the similar if not identical timeline functions, and the fondness for maps on which data were plotted and projected. “Peas in a pod,” my friend, a former NATO officer told me. Are not peas alike?

The reason — and no one has really provided this information — is that the developers shared a foxhole. The government entities in Israel train people with the software and systems proven over the years to be useful. The young trainees carry their learnings forward in their careers. Then, when mustered out, a few bright sparks form companies or join intelware giants like Verint and continue to enhance existing tools or build new ones. The idea is that life in the foxhole imbues those who experience it with certain similar mental furniture. The ideas, myths, and software experiences form the muddy floor and dirt walls of the foxhole. I suppose one could call this “digital bias,” which later manifests itself in the dozens of Tel Aviv-based intelware, policeware, and spyware companies’ products and services.

Why am I mentioning this?

The reason is that I was shocked and troubled by the alleged surprise attack. If you want to follow the activity, navigate to X.com and search that somewhat crippled system for #OSINT. Skip the “Top” results and go to the “Latest” tab.

Several observations:

  1. Are the Israeli intelware products (many of which are controversial and expensive) flawed? Obviously excellent software processing “signals” was blind to the surprise attack, right?
  2. Are the Israeli professionals operating the software unable to use it to prevent surprise attacks? Obviously excellent software in the hands of well-trained professionals flags signals and allows action to be taken when warranted. Did that happen? Has Israeli intel training fallen short of its goal of protecting the nation? Hmmm. Maybe, yes.
  3. Have those who hype intelware and the excellence of a particular system and method been fooled, falling into the dark pit of OSINT blind spots like groupthink and “reasoning from anecdote, not fact”? I am leaning toward a “yes”, gentle reader.

The time for a critical look at what works and what doesn’t is what the British call “from this day” work. The years of marketing craziness are one thing, but when either the system or the method allows people to be killed without warning or cause, one message is broadcast: “Folks, something is very, very wrong.”

Perhaps certification of these widely used systems is needed? Perhaps a hearing in an appropriate venue is warranted?

Blind spots can cause harm. Marketers can cause harm. Poorly trained operators can cause harm. Even foxholes require tidying up. Technology for intelligence applications is easy to talk about, but it is now clear to everyone engaged in making sense of signals that one country’s glammed-up systems missed the wicket.

Stephen E Arnold, October 9, 2023

Cognitive Blind Spot 2: Bandwagon Surfing or Do What May Be Fashionable

October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.


Humans are into trends. The NFL and Taylor Swift appear to be a trend. A sporting money machine and a popular music money machine. Jersey sales increase. Ms. Swift’s music sales go up. New eyeballs track a certain football player. The question is, “Who is exploiting whom?”

Which bandwagon are you riding? Thank you, MidJourney. Gloom seems to be part of your DNA.

Think about large language models and smart software. A similar dynamic may exist. Late in 2022, the natural language interface became the next big thing. Students and bad actors figured out that using a ChatGPT-type service could expedite certain activities. Students could produce 500-word essays in less than a minute. Bad actors could generate snippets of code in seconds. In short, many people were hopping on the LLM bandwagon decorated with smart software logos.

Now a bandwagon powered by healthy skepticism may be heading toward main street. Wired Magazine published a short essay titled “Chatbot Hallucinations Are Poisoning Web Search.” The foundational assumption is that Web search was better before ChatGPT-type incursions. I am not sure that idea is valid, but for the purposes of illustrating bandwagon surfing, it will pass unchallenged. Wired’s main point is that as AI-generated content proliferates, the results delivered by Google and a couple of other but vastly less popular search engines will deteriorate. I think this is a way to assert that lousy LLM output will make Web search worse. “Hallucination” is jargon for made up or just incorrect information.

Consider this essay “Evaluating LLMs Is a Minefield.” The essay and slide deck are the work of two AI wizards. The main idea is that figuring out whether a particular LLM or a ChatGPT-service is right, wrong, less wrong, more right, biased, or a digital representation of a 23 year old art history major working in a public relations firm is difficult.

I am not going to take the side of either referenced article. The point is that the hyperbolic excitement about “smart software” seems to be giving way to LLM criticism. From software for Everyman, the services are becoming tools for improving productivity.

To sum up, the original bandwagon has been pushed out of the parade by a new bandwagon filled with poobahs explaining that smart software, LLM, et al are making the murky, mysterious Web worse.

The question becomes, “Are you jumping on the bandwagon with the banner that says ‘LLMs are really bad,’ or are you sticking with the rah rah crowd?” The point is that information at one point was good. Now information is less good. Imagine how difficult it will be to determine what’s right or wrong, biased or unbiased, or acceptable or unacceptable.

Who wants to do the work to determine provenance or answer questions about accuracy? Not many people. That, rather than lousy Web search, may be more important to some professionals. But that does not solve the problem of the time and resources required to deal with accuracy and other issues.

So which bandwagon are you riding? The NFL or Taylor Swift? Maybe the tension between the two?

Stephen E Arnold, October 6, 2023

Is Google Setting a Trap for Its AI Competition?

October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The litigation about the use of Web content to train smart generative software is ramping up. Outfits like OpenAI, Microsoft, and Amazon and its new best friend will be snagged in the US legal system.

But what big outfit will be ready to offer those hungry to use smart software without legal risk? The answer is the Google.

How is this going to work?

Simple. Google is beavering away with its synthetic data. Some real data are used to train sophisticated stacks of numerical recipes. The idea is that these algorithms will be “good enough”; thus, the need for “real” information is obviated. And Google has another trick up its sleeve. The company has coveys of coders working on trimmed-down systems and methods. The idea is that using less information will produce more and better results than the crazy idea of indexing content from wherever in real time. The small data can be licensed while the competitors are spending their days with lawyers.
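The “less is more” approach described above resembles knowledge distillation: a small student model learns to mimic a large teacher model’s softened outputs rather than chewing through mountains of raw data. The sketch below is my own minimal illustration of that general idea, not code from any Google paper; the function names, temperature value, and logits are invented for demonstration:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields softer targets."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: the quantity a small model minimizes when it learns from a
    big model's outputs instead of from raw labeled data."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# A student that mimics the teacher incurs a lower loss than one that diverges.
teacher = [4.0, 1.0, 0.5]
good_student = [3.8, 1.1, 0.4]
bad_student = [0.5, 4.0, 1.0]
```

Training on these soft targets is what lets a “dedicated small machine learning model” get by with less data, which is the commercial angle the marketing collateral is pushing.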

How do I know this? I don’t, but Google is providing tantalizing clues in marketing collateral like “Researchers from the University of Washington and Google have Developed Distilling Step-by-Step Technology to Train a Dedicated Small Machine Learning Model with Less Data.” The author is a student who provides sources for the information about the “less is more” approach to smart software training.

And, may the Googlers sing her praises, she cites Google technical papers. In fact, one of the papers is described by the fledgling Googler as “groundbreaking.” Okay.

What’s really being broken is the approach of some of Google’s most formidable competition.

When will the Google spring its trap? It won’t. But as the competitors get stuck in legal mud, the Google will be an increasingly attractive alternative.

The last line of the Google marketing piece says:

Check out the Paper and Google AI Article. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

Get that young marketer a Google mouse pad.

Stephen E Arnold, October 6, 2023

What Type of Employee? What about Those Who Work at McKinsey & Co.?

October 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Yes, I read When McKinsey Comes to Town: The Hidden Influence of the World’s Most Powerful Consulting Firm by Walt Bogdanich and Michael Forsythe. No, I was not motivated to think happy thoughts about the estimable organization. Why? Oh, I suppose the image of the opioid addicts in southern Indiana, Kentucky, and West Virginia rained on the parade.

I did scan a “thought piece” written by McKinsey professionals, probably a PR person, certainly an attorney, and possibly a partner who owned the project. The essay’s title is “McKinsey Just Dropped a Report on the 6 Employee Archetypes. Good News for Some Organizations, Terrible for Others. What Type of Dis-Engaged Employee Is On Your Team?” The title was the tip-off that a PR person was involved. My hunch is that the McKinsey professionals want to generate some bookings for employee assessment studies. What better way than converting some proprietary McKinsey information into a white paper and then getting the white paper in front of an editor at an “influence center”? The answer to the question, obviously, is hire McKinsey, and the firm will tell you whom to cull.

Inc. converts the white paper into an article and McKinsey defines the six types of employees. From my point of view, this is standard blue chip consulting information production. However, there was one comment which caught my attention:

Approximately 4 percent of employees fall into the “Thriving Stars” category, representing top talent that brings exceptional value to the organization. These individuals maintain high levels of well-being and performance and create a positive impact on their teams. However, they are at risk of burnout due to high workloads.

Now what type of company hires these four percenters? Why, blue-chip consulting companies like McKinsey, Bain, BCG, Booz Allen, etc. And what are the contributions these firms’ professionals make to society? Jump back to When McKinsey Comes to Town. One of the highlights of that book is the discussion of the consulting firm’s role in the opioid epidemic.

That’s an achievement of which to be proud. Oh, and the other five types of employees? Don’t bother to apply for a job at the blue-chip outfits.

Stephen E Arnold, October 4, 2023
