Perfect for Spying, Right?

June 28, 2024

And we thought noise-cancelling headphones were nifty. The University of Washington’s UW News announces “AI Headphones Let Wearer Listen to a Single Person in a Crowd, by Looking at them Just Once.” That will be a real help for the hard-of-hearing. Also spies. Writers Stefan Milne and Kiyomi Taguchi explain:

“A University of Washington team has developed an artificial intelligence system that lets a user wearing headphones look at a person speaking for three to five seconds to ‘enroll’ them. The system, called ‘Target Speech Hearing,’ then cancels all other sounds in the environment and plays just the enrolled speaker’s voice in real time even as the listener moves around in noisy places and no longer faces the speaker. … To use the system, a person wearing off-the-shelf headphones fitted with microphones taps a button while directing their head at someone talking. The sound waves from that speaker’s voice then should reach the microphones on both sides of the headset simultaneously; there’s a 16-degree margin of error. The headphones send that signal to an on-board embedded computer, where the team’s machine learning software learns the desired speaker’s vocal patterns. The system latches onto that speaker’s voice and continues to play it back to the listener, even as the pair moves around. The system’s ability to focus on the enrolled voice improves as the speaker keeps talking, giving the system more training data.”

If the sound quality is still not satisfactory, the user can refresh enrollment to improve clarity. Though the system is not commercially available, the code used for the prototype is available for others to tinker with. It is built on last year’s “semantic hearing” research by the same team. Target Speech Hearing still has some limitations. It does not work if multiple loud voices are coming from the target’s direction, and it can only eavesdrop on, er, listen to one speaker at a time. The researchers are now working on bringing their system to earbuds and hearing aids.
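How might the enroll-then-isolate trick work in practice? Below is a toy sketch, ours and not the UW team’s, with everything invented for illustration: the frame size, the crude spectral “voiceprint,” and the 0.8 similarity threshold. The real system runs trained neural networks on an embedded computer; this numpy arithmetic merely mimics the workflow of checking head alignment, enrolling a voice, and gating the audio.

```python
# Hypothetical sketch of "enroll, then isolate" -- NOT the UW team's code.
import numpy as np

SAMPLE_RATE = 16_000
FRAME = 512  # samples per analysis frame (invented for illustration)

def arrival_offset(left, right):
    """Lag (in samples) that best aligns the two ear signals; ~0 means the wearer faces the speaker."""
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

def spectral_frames(signal):
    """Magnitude spectra of consecutive frames, one row per frame."""
    n = len(signal) // FRAME
    frames = signal[: n * FRAME].reshape(n, FRAME)
    return np.abs(np.fft.rfft(frames * np.hanning(FRAME), axis=1))

def enroll(snippet):
    """Average spectrum of the 3-5 second enrollment snippet: a crude 'voiceprint'."""
    return spectral_frames(snippet).mean(axis=0)

def isolate(mixture, voiceprint, thresh=0.8):
    """Mute frames whose spectrum is dissimilar to the enrolled voiceprint."""
    out = mixture.copy()
    for i, spec in enumerate(spectral_frames(mixture)):
        cos = spec @ voiceprint / (np.linalg.norm(spec) * np.linalg.norm(voiceprint) + 1e-9)
        if cos < thresh:
            out[i * FRAME : (i + 1) * FRAME] = 0.0
    return out

# Toy run: enroll a 440 Hz "speaker," then pull that voice out of noise.
t = np.arange(SAMPLE_RATE * 3) / SAMPLE_RATE
speaker = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * np.random.default_rng(0).standard_normal(t.size)
print("head alignment offset:", arrival_offset(speaker[:FRAME], speaker[:FRAME]))
cleaned = isolate(speaker + noise, enroll(speaker))
```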

Cynthia Murrell, June 28, 2024

Chasing a Folly: Identifying AI Content

June 24, 2024

As are other academic publishers, Springer Nature Group is plagued by fake papers. Now the company announces, “Springer Nature Unveils Two New AI Tools to Protect Research Integrity.” How effective the tools are remains to be proven, but at least the company is making an effort. The press release describes text-checker Geppetto and image-analysis tool SnappShot. We learn:

“Geppetto works by dividing the paper up into sections and uses its own algorithms to check the consistency of the text in each section. The sections are then given a score based on the probability that the text in them has been AI generated. The higher the score, the greater the probability of there being problems, initiating a human check by Springer Nature staff. Geppetto is already responsible for identifying hundreds of fake papers soon after submission, preventing them from being published – and from taking up editors’ and peer reviewers’ valuable time.

SnappShot, also developed in-house, is an AI-assisted image integrity analysis tool. Currently used to analyze PDF files containing gel and blot images and look for duplications in those image types – another known integrity problem within the industry – this will be expanded to cover additional image types and integrity problems and speed up checks on papers.”
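What might Geppetto’s split-score-escalate pipeline look like in code? A minimal sketch follows, assuming a hypothetical detector function (the stub below keys on suspiciously uniform sentence lengths, a crude stand-in for a real classifier) and an invented 0.7 escalation threshold. It is not Springer Nature’s implementation.

```python
# Sketch of a section-scoring pipeline; the detector is a hypothetical stub.
from dataclasses import dataclass

@dataclass
class SectionScore:
    heading: str
    probability_ai: float  # 0.0 = surely human, 1.0 = surely machine

def detect_ai_probability(text: str) -> float:
    """Hypothetical stand-in for a trained AI-text classifier."""
    # Crude proxy: flag suspiciously uniform sentence lengths.
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    spread = max(lengths) - min(lengths)
    return max(0.0, 1.0 - spread / 20.0)

def screen_paper(sections: dict[str, str], threshold: float = 0.7) -> list[SectionScore]:
    """Score each section; return the ones that need a human check."""
    flagged = []
    for heading, text in sections.items():
        score = SectionScore(heading, detect_ai_probability(text))
        if score.probability_ai >= threshold:
            flagged.append(score)
    return flagged

paper = {
    "Introduction": "We study cells. We grow cells. We count cells. We report cells.",
    "Methods": "Cultures were maintained at 37 degrees for a fortnight, with media "
               "replaced whenever the supervising technician judged it necessary.",
}
for s in screen_paper(paper):
    print(f"Escalate '{s.heading}' to staff review (p={s.probability_ai:.2f})")
```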

Springer Nature’s Chris Graf emphasizes the importance of research integrity and vows to continue developing and improving in-house tools. To that end, we learn, the company is still growing its fraud-detection team. The post points out Springer Nature is a contributing member of the STM Integrity Hub.

Based in Berlin, Springer Nature was formed in 2015 through the combination of Nature Publishing Group, Macmillan Education, and Springer Science+Business Media. A few of its noteworthy publications include Scientific American, Nature, and this collection of Biology, Clinical Medicine, and Health journals.

Cynthia Murrell, June 24, 2024

Detecting AI-Generated Research Increasingly Difficult for Scientific Journals

June 12, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Reputable scientific journals would like to publish only papers written by humans, but they are finding it harder and harder to enforce that standard. Researchers at the University of Chicago Medical Center examined the issue and summarize their results in “Detecting Machine-Written Content in Scientific Articles,” published at Medical Xpress. Their study was published in the Journal of Clinical Oncology Clinical Cancer Informatics on June 1. We presume it was written by humans.

The team used commercial AI detectors to evaluate over 15,000 oncology abstracts from 2021 through 2023. We learn:

“They found that there were approximately twice as many abstracts characterized as containing AI content in 2023 as compared to 2021 and 2022—indicating a clear signal that researchers are utilizing AI tools in scientific writing. Interestingly, the content detectors were much better at distinguishing text generated by older versions of AI chatbots from human-written text, but were less accurate in identifying text from the newer, more accurate AI models or mixtures of human-written and AI-generated text.”

Yes, that tracks. We wonder if it is even harder to detect AI-generated research that has, hypothetically, been run through two or three different smart rewrite systems. Oh, who would do that? Maybe the former president of Stanford University?
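For the counting-minded, the year-over-year comparison boils down to tallying detector flags by year. Here is a toy illustration with fabricated records; the study itself ran commercial detectors over 15,000+ real abstracts.

```python
# Toy year-over-year tally; the (year, flagged) records below are fabricated.
from collections import Counter

# In the real study these pairs would come from running each abstract
# through a commercial AI-content detector.
results = [(2021, False), (2021, False), (2021, True),
           (2022, True), (2022, False), (2022, False),
           (2023, True), (2023, True), (2023, False)]

totals = Counter(year for year, _ in results)
flagged = Counter(year for year, is_ai in results if is_ai)

for year in sorted(totals):
    rate = flagged[year] / totals[year]
    print(f"{year}: {flagged[year]}/{totals[year]} abstracts flagged ({rate:.0%})")
```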

The researchers predict:

“As the use of AI in scientific writing will likely increase with the development of more effective AI language models in the coming years, Howard and colleagues warn that it is important that safeguards are instituted to ensure only factually accurate information is included in scientific work, given the propensity of AI models to write plausible but incorrect statements. They also concluded that although AI content detectors will never reach perfect accuracy, they could be used as a screening tool to indicate that the presented content requires additional scrutiny from reviewers, but should not be used as the sole means to assess AI content in scientific writing.”

That makes sense, we suppose. But humans are not perfect at spotting AI text, either, though there are ways to train oneself. Perhaps if journals combine savvy humans with detection software, they can catch most AI submissions. At least until the next generation of ChatGPT comes out.

Cynthia Murrell, June 12, 2024

Will AI Kill Us All? No, But the Hype Can Be Damaging to Mental Health

June 11, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I missed the talk about how AI will kill us all. Planned? Nah, heavy traffic. From what I heard, none of the cyber investigators believed the person trying hard to frighten them. There are other, slightly more tangible, threats. One of the attendees whose name I did not bother to remember asked me, “What do you think about artificial intelligence?” My answer was, “Meh.”


A contrarian walks alone. Why? It is hard to make money being negative. At the conference I attended June 4, 5, and 6, attendees with whom I spoke just did not care. Thanks, MSFT Copilot. Good enough.

Why, you may ask? My method of handling the question is to refer to articles like this: “AI Appears to Rapidly Be Approaching a Brick Wall Where It Can’t Get Smarter.” This write up offers an opinion not popular among the AI cheerleaders:

Researchers are ringing the alarm bells, warning that companies like OpenAI and Google are rapidly running out of human-written training data for their AI models. And without new training data, it’s likely the models won’t be able to get any smarter, a point of reckoning for the burgeoning AI industry.

Like the argument that AI will change everything, this claim applies to systems based upon indexing human content. I am reasonably certain that more advanced smart software with different concepts will emerge. I am not holding my breath because much of the current AI hoo-hah has been gestating longer than a newborn baby elephant.

So what’s with the doom pitch? Law enforcement apparently does not buy the idea. My team doesn’t. For the foreseeable future, applied smart software operating within some boundaries will allow some tasks to be completed quickly and with acceptable reliability.  Robocop is not likely for a while.

One interesting question is why the polarization exists. First, it is easy. And, second, one can cash in. If one is a cheerleader, one can invest in a promising AI start-up and make (in theory) oodles of money. By being a contrarian, one can tap into the segment of people who think the sky is falling. Being a contrarian is “different.” Plus, by predicting implosion and the end of life one can get attention. That’s okay. I try to avoid being the eccentric carrying a sign.

The current AI bubble relies in a significant way on a Google recipe: Indexing text. The approach reflects Google’s baked in biases. It indexes the Web; therefore, it should be able to answer questions by plucking factoids. Sorry, that doesn’t work. Glue cheese to pizza? Sure.

Hopefully new lines of investigation will reveal different approaches. I am skeptical about synthetic data (made-up data that is probably correct). My fear is that we will require another 10, 20, or 30 years of research to move beyond shuffling content blocks around. There has to be a higher level of abstraction operating. But machines are machines, and wetware (human brains) is different.

Will life end? Probably but not because of AI unless someone turns over nuclear launches to “smart” software. In that case, the crazy eccentric could be on the beam.

Stephen E Arnold, June 11, 2024

A Cultural Black Hole: Lost Data

May 22, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

A team in Egypt discovered something mysterious near the pyramids. I assume National Geographic will dispatch photographers. Archeologists will probe. Artifacts will be discovered. How much more is buried under the surface of Giza? People have been digging for centuries, and their efforts are rewarded. But what about the artifacts of the digital age?


Upon opening the secret chamber, the digital construct explains to the archeologist from the future that there is a little problem getting the digital information. Thanks, MSFT Copilot.

My answer is, “Yeah, good luck.” The ephemeral quality of online information means that finding something buried near the pyramid of Djoser is going to be more rewarding than looking for the once-findable information about MIC, RAC, and ZPIC on a US government Web site. The same void exists for quite a bit of human output captured in now-disappeared systems like The Point (Top 5% of the Internet) and millions of other digital constructs.

A survey report from the Pew Research Center highlights link rot. The idea is simple: click on a link, and the indexed or pointed-to content cannot be found. “When Online Content Disappears” has a snappy subtitle:

38 percent of Web pages that existed in 2013 are no longer accessible a decade later.

Wait, aren’t national libraries like the Library of Congress supposed to keep “information”? What about the National Archives? What about the Internet Archive (an outfit busy in court)? What about the Google? (That’s the “all the world’s information” outfit, right?) What about the Bibliothèque nationale de France with its rich tradition of keeping French information?

News flash. Unlike the tangible objects unearthed in Egypt, data archeologists are going to have to buy old hard drives on eBay, dig through rubbish piles in “recycling” facilities, or scour yard sales for old machines. Then one has to figure out how to get the data. Presumably smart software can filter through the bits looking for useful data. My suggestion? Don’t count on this happening.

Here are several highlights from the Pew Report:

  • Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.
  • Nearly one-in-five tweets are no longer publicly visible on the site just months after being posted.
  • 21% of all the government webpages we examined contained at least one broken link… Across every level of government we looked at, there were broken links on at least 14% of pages; city government pages had the highest rates of broken links.
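Measuring link rot is easy enough to automate. Here is a minimal probe in the spirit of the Pew exercise, using only the Python standard library; the URLs are placeholders, not Pew’s corpus.

```python
# Minimal link-rot probe; example URLs are placeholders.
import urllib.request
import urllib.error

def is_dead(url: str, timeout: float = 10.0) -> bool:
    """True if the URL 404s, errors out, or the host is gone."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "linkrot-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status >= 400
    except (urllib.error.URLError, TimeoutError, ValueError):
        # HTTPError (404 and friends) is a subclass of URLError.
        return True

urls = ["https://example.com/", "https://example.com/no-such-page-2013"]
dead = [u for u in urls if is_dead(u)]
print(f"{len(dead)}/{len(urls)} links rotted: {dead}")
```

A real crawl would retry with GET when a server rejects HEAD and would rate-limit itself, but the principle is the same.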

The report presents a picture of lost data. Trying to locate these missing data will be less fruitful than digging in the sands of Egypt.

The word “rot” is associated with decay. The concept of “link rot” complements the business practices of government agencies and organizations that once gathered, preserved, and organized data. Are libraries at fault? Are regulators the problem? Are the content creators the culprits?

Sure, but the issue is that as the euphoria and reality of digital information slosh like water in a swimming pool during an earthquake, no one knows what to do. Therefore, nothing is done until knee jerk reflexes cause something to take place. In the end, no comprehensive collection plan is in place for the type of information examined by the Pew folks.

From my vantage point, online and digital information are significant features of life today. Like goldfish in a bowl, we are not able to capture the outputs of the digital age. We don’t understand the datasphere, my term for the environment in which much activity exists.

The report does not address the question, “So what?”

That’s part of the reason future data archeologists will struggle. The rush of zeros and ones has undermined information itself. If ignorance of these data creates bliss, one might say, “Hello, Happy.”

Stephen E Arnold, May 22, 2024

E2EE: Not Good Enough. So What Is Next?

May 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

What’s wrong with software? Here is one answer:

I think one !*#$ thing about the state of technology in the world today is that for so many people, their job, and therefore the thing keeping a roof over their family’s head, depends on adding features, which then incentives people to, well, add features. Not to make and maintain a good app.


Who has access to the encrypted messages? Someone. That’s why this young person is distraught as she is escorted to the police van. Thanks, MSFT Copilot. Good enough.

This statement appears in “A Rant about Phone Messaging Apps UI.” But there are some more interesting issues in messaging; specifically, E2EE or end-to-end encrypted messaging. The current example of talking about the wrong topic in a quite important application space is summarized in Business Insider, an estimable online publication with snappy headlines like this one: “In the Battle of Telegram vs Signal, Elon Musk Casts Doubt on the Security of the App He Once Championed.” That write up reports as “real” news:

Signal has also made its cryptography open-source. It is widely regarded as a remarkably secure way to communicate, trusted by Jeff Bezos and Amazon executives to conduct business privately.

I want to point out that Edward Snowden “endorses” Signal. He does not use Telegram. Does he know something that others may not have tucked into their memory stack?

The Business Insider “real” news report includes this quote from a Big Dog at Signal:

“We use cryptography to keep data out of the hands of everyone but those it’s meant for (this includes protecting it from us),” Whittaker wrote. “The Signal Protocol is the gold standard in the industry for a reason–it’s been hammered and attacked for over a decade, and it continues to stand the test of time.”
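For readers who want the mechanics, here is a bare-bones sketch of public-key E2EE using PyNaCl, the Python bindings for libsodium. It shows the basic model only: the Signal Protocol layers a double ratchet on top so every message gets fresh keys and a compromised key cannot unlock past traffic.

```python
# Bare-bones public-key E2EE via PyNaCl; an illustration, not the Signal Protocol.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob's public key; only Bob's private key can decrypt.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# Bob decrypts with his private key and Alice's public key (authenticating her).
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual place'
```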

Pavel Durov, the owner of Telegram and the brother of a person with something like two Ph.D.’s (his brother Nikolai), suggests that Signal is insecure. Keep in mind that Mr. Durov has been the subject of some scrutiny because, after telling the estimable Tucker Carlson that Telegram is about free speech, Telegram blocked Ukraine’s government from using a Telegram feature to beam pro-Ukraine information into Russia. That’s a sure-fire way to make clear which country catches Mr. Durov’s attention. He did this, according to rumors reaching me from a source with links to the Ukraine, because Apple or maybe Google made him do it. Blaming the alleged US high-tech oligopolies is a good red herring, and a stinky one at that.

What’s Telegram got to do with the complaint about “features”? In my view, Telegram has been adding features at a pace more rapid than Signal, WhatsApp, and a boatload of competitors. Have those features created some vulnerabilities in the Telegram set up? In fact, I am not sure Telegram is merely a messaging platform. I also think that the company may be poised to do an end run around open sourcing its home-grown encryption method.

What does this mean? Here are a few observations:

  1. With governments working overtime to gain access to encrypted messages, Telegram may have to add some beef.
  2. Established firms and start-ups are nosing into obfuscation methods that push beyond today’s encryption methods.
  3. Information about who is behind an E2EE messaging service is tough to obtain. What is easy to document with a Web search may be one of those “fake” or misinformation plays.

Net net: E2EE is getting long in the tooth. Something new is needed. If you want to get a glimpse of the future, catch my lecture about E2EE at the upcoming US government Cycon 2024 event in September. Want a preview? We have a briefing. Write benkent2020 at yahoo dot com for restrictions and prices.

Stephen E Arnold, May 21, 2024

AI and the Workplace: Change Will Happen, Just Not the Way Some Think

May 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “AI and the Workplace.” The essay contains observations related to smart software in the workplace. The idea is that employees who are savvy will experiment and try to use the technology within today’s work framework. I think that will happen just as the essay suggests. However, there is a larger, more significant impact that is easy to miss when looking only at today’s workplace. Employees either [a] want to keep their job, [b] gain new skills and get a better job, or [c] quit to vegetate or become an entrepreneur. I understand.

The data in the report make clear that some employees are what I call change flexible; that is, these motivated individuals differentiate themselves from others at work by learning and experimenting. Note that more than half the people in the “we don’t use AI” categories want to use AI.


These data come from the cited article and an outfit called Asana.

The other data in the report tell a similar story. Some employees get a productivity boost; others just chug along, occasionally getting some benefit from AI. The future, therefore, requires learning, double checking outputs, and accepting that it is early days for smart software. This makes sense; however, it misses where the big change will come.

In my view, the major shift will appear in companies founded now that AI is more widely available. These organizations will be crafted to make optimal use of smart software from the day the new idea takes shape. A new news organization might look like Grok News (the Elon Musk project) or the much reviled AdVon. But even these outfits are anchored in the past. Grok News just substitutes smart software (which hopefully will not kill its users) for old work processes and outputs. AdVon was a “rip and replace” tool for Sports Illustrated. That did not go particularly well in my opinion.

The big job impact will be on new organizational set ups with AI baked in. The types of people working at these organizations will not be from the lower 98 percent of the work force pool. I think the majority of employees who once expected to work in information processing or knowledge work will be like a 58-year-old brand manager at a vape company. Job offers will not be easy to get, and new companies might opt for smart software and search engine optimization marketing. How many workers will that require? Maybe zero. Someone on Fiverr.com will do the job for a couple of hundred dollars a month.

In my view, new companies won’t need workers who are not in the top tier of some high value expertise. Who needs a consulting team when one bright person with knowledge of orchestrating smart software is able to do the work of a marketing department, a product design unit, and a strategic planning unit? In fact, there may not be any “employees” in the sense of workers at a warehouse or a consulting firm like Deloitte.

Several observations are warranted:

  1. Predicting downstream impacts of a technology unfamiliar to a great many people is tricky and sometimes impossible. Who knew social media would spawn a renaissance in getting tattooed?
  2. Visualizing how an AI-centric start-up is assembled is a challenge. I submit it won’t look like an insurance company today. What’s a Tesla repair station look like? The answer is, “Not much.”
  3. Figuring out how to be one of the elite who gets a job means being perceived as “smart.” Unlike Alina Habba, I know that I cannot fake “smart.” How many people will work hard to maximize the return on their intelligence? The answer, in my experience, is, “Not too many, dinobaby.”

Looking at the future from within the framework of today’s datasphere distorts how one perceives impact. I don’t know what the future looks like, but it will have some quite different configurations than the companies of today. The future will arrive slowly, and then it will become the foundation of further evolution. What will the grandson of tomorrow’s AI firm look like? Beauty will be in the eye of the beholder.

Net net: Where will the never-to-be-employed find something meaningful to do?

Stephen E Arnold, May 15, 2024

AI May Help Real Journalists Explain Being Smart. May, Not Will

May 9, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I found the link between social media and stupid people interesting. I am not sure I embrace the causal chain as presented in “As IQ Scores Decline in the US, Experts Blame the Rise of Tech — How Stupid Is Your State?” The “real” news story has a snappy headline, but social media and IQ? Let’s take a look.

The write up’s first sentence features the novel coinage dumbening. I assume the use of dumb as a gerund opens the door to such statements as “I dumb” or “We dumbed together at Harvard’s lecture about ethics” or “My boss dumbed again, like he did last summer.”

Do all Americans go through a process of dumbening?


A tour group has a low IQ when it comes to understanding ancient rock painting. Should we blame technology and social media? Thanks, MSFT Copilot. Earning extra money because you do great security?

The write up explains that IQ scores are going down after a “rise” which began in 1905. What causes this decline? Is it broken homes? Lousy teachers? A lack of consequences for inattentiveness? Skipping school? Crappy pre-schools? Bus rides? School starting too early or too late? Dropping courses in art, music, and PE? Chemical-infused food? Television? Not learning cursive?

The answer is, “Technology.” More specifically, the culprit is social media. The article quotes a professor, who opines:

The professor [Hetty Roessingh, professor emerita of education at the University of Calgary] said that time spent with devices like phones and iPads means less time for more effective methods of increasing one’s intelligence level.

Several observations:

  1. Wow.
  2. Technology is an umbrella term. Social media is an umbrella term. What exactly is causing people to be dumb?
  3. What about an IQ test being mismatched to those who take it? My IQ was pretty low when I lived in Campinas, Brazil. It was tough to answer questions I could not read until I learned Portuguese.

Net net: Dumbening. You got it.

Stephen E Arnold, May 9, 2024

A High-Tech Best Friend and Campfire Lighter

May 1, 2024

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

A dog is allegedly man’s best friend. I have a French bulldog, and I am not 100 percent sure that’s an accurate statement. But I have a way to get the pal I have wanted for years.

Ars Technica reports “You Can Now Buy a Flame-Throwing Robot Dog for Under $10,000” from Ohio-based maker Throwflame. See the article for footage of this contraption setting fire to what appears to be a forest. Terrific. Reporter Benj Edwards writes:

“Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone. It also includes a LIDAR sensor for mapping and obstacle avoidance, laser sighting, and first-person view (FPV) navigation through an onboard camera. The product appears to integrate a version of the Unitree Go2 robot quadruped that retails alone for $1,600 in its base configuration. The company lists possible applications of the new robot as ‘wildfire control and prevention,’ ‘agricultural management,’ ‘ecological conservation,’ ‘snow and ice removal,’ and ‘entertainment and SFX.’ But most of all, it sets things on fire in a variety of real-world scenarios.”

And what does my desired dog look like? The GenY Tibby asleep at work? Nope.


I hope my Thermonator includes an AI at the controls. Maybe that will be an add-on feature in 2025? Unitree, maker of the robot base mentioned above, once vowed (along with five other robotics firms) to oppose the weaponization of its products. Perhaps Throwflame won them over with assertions that their device is not technically a weapon, since flamethrowers are not considered firearms by federal agencies. It is currently legal to own this mayhem machine in 48 states; certain restrictions apply in Maryland and California. How many crazies can get their hands on a mere $9,420 plus tax for that kind of power? Even factoring in the cost of napalm (sold separately), probably quite a few.

Cynthia Murrell, May 1, 2024

Research into Baloney Uses Four Letter Words

March 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I am critical of university studies. However, I spotted one which strikes at the heart of the Silicon Valley approach to life. “Research Shows That People Who BS Are More Likely to Fall for BS” has an interesting subtitle; to wit:

People who frequently mislead others are less able to distinguish fact from fiction, according to University of Waterloo researchers


A very good-looking bull spends time reviewing information helpful to him in selling his artificial intelligence system. Unlike the two cows, he does not realize that he is living in a construct of BS. Thanks, MSFT Copilot. How are you doing with those printer woes today? Good enough, I assume.

Consider the headline in the context of promises about technologies which will “change everything.” Examples range from the marvels of artificial intelligence to the crazy assertions about quantum computing. My hunch is that the reason baloney has become one of the most popular mental foods in the datasphere is that people desperately want a silver bullet. Others know that if a silver bullet is described with appropriate language and a bit of sizzle, the thought can be a runway for money.

What’s this mean? We have created a culture in North America that makes “technology” and “glittering generalities” into hyperbole factories.  Why believe me? Let’s look at the “research.”

The write up reports:

People who frequently try to impress or persuade others with misleading exaggerations and distortions are themselves more likely to be fooled by impressive-sounding misinformation… The researchers found that people who frequently engage in “persuasive bullshitting” were actually quite poor at identifying it. Specifically, they had trouble distinguishing intentionally profound or scientifically accurate fact from impressive but meaningless fiction. Importantly, these frequent BSers are also much more likely to fall for fake news headlines.

Let’s think about this assertion. The technology story teller is an influential entity. In the world of AI, for example, some firms which have claimed “quantum supremacy” showcase executives who spin glorious word pictures of smart software reshaping the world. The upsides are magnetic; the downsides dismissed.

What about crypto champions? Telegram, founded by two Russian brothers, is spinning fabulous tales of revenue from advertising in an encrypted messaging system and cheerleading for a more innovative crypto currency. Operating from Dubai, the outfit has true believers. What’s not to like? Maybe these bros have the solution that has long been part of the Harvard Winklevoss confections.

What shocked me about the write up was the use of the word “bullshit.” Here’s an example from the academic article:

“We found that the more frequently someone engages in persuasive bullshitting, the more likely they are to be duped by various types of misleading information regardless of their cognitive ability, engagement in reflective thinking, or metacognitive skills,” Littrell said. “Persuasive BSers seem to mistake superficial profoundness for actual profoundness. So, if something simply sounds profound, truthful, or accurate to them that means it really is. But evasive bullshitters were much better at making this distinction.”

What if the write up is itself BS? What if the journal publishing the article — British Journal of Social Psychology — is BS? On one level, I want to agree that those skilled in the art of baloney manufacturing, distributing, and outputting have a quite specific skill. On the other hand, I admit that I cannot determine at first glance if the information provided is not synthetic, ripped off, shaped, or weaponized. I would assert that most people are not able to identify what is “verifiable”, “an accurate accepted fact”, or “true.”

We live in a post-reality era. When the presidents of outfits like Harvard and Stanford face challenges to their research accuracy, what can I do when confronted with a media release about BS? Upon reflection, I think the generalization that people cannot figure out what’s on point or not is true. When drug store cashiers cannot make change, I think that’s strong anecdotal evidence that other parts of their mental toolkit have broken or missing parts.

But the statement that those who output BS cannot themselves identify BS may be part of a broader educational failure. Lazy people, those who take short cuts, people who know how to do the PT Barnum thing, and sales professionals trying to close a deal reflect a societal issue. In a world of baloney, everything is baloney.

Stephen E Arnold, March 25, 2024

