New Learning Model Claims to Reduce Bias, Improve Accuracy

August 30, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Promises, promises. We have seen developers try and fail to eliminate bias in machine learning models before. Now ScienceDaily reports, “New Model Reduces Bias and Enhances Trust in AI Decision-Making and Knowledge Organization.” Will this effort by University of Waterloo researchers be the first to succeed? The team worked in a field where AI bias and inaccuracy can be most devastating: healthcare. The write-up tells us:

“Hospital staff and medical professionals rely on datasets containing thousands of medical records and complex computer algorithms to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes. This inherent bias and pattern entanglement leads to misdiagnoses and inequitable healthcare outcomes for specific patient groups. Thanks to new research led by Dr. Andrew Wong, a distinguished professor emeritus of systems design engineering at Waterloo, an innovative model aims to eliminate these barriers by untangling complex patterns from data to relate them to specific underlying causes unaffected by anomalies and mislabeled instances. It can enhance trust and reliability in Explainable Artificial Intelligence (XAI).”

Wong states his team was able to disentangle statistics in a certain set of complex medical results data, leading to the development of a new XAI model they call Pattern Discovery and Disentanglement (PDD). The post continues:

“The PDD model has revolutionized pattern discovery. Various case studies have showcased PDD, demonstrating an ability to predict patients’ medical results based on their clinical records. The PDD system can also discover new and rare patterns in datasets. This allows researchers and practitioners alike to detect mislabels or anomalies in machine learning.”

If accurate, PDD could lead to more thorough algorithms that avoid hasty conclusions. Less bias and fewer mistakes. Can this ability be extrapolated to other fields, like law enforcement, social services, and mortgage decisions? Assurances are easy.

Cynthia Murrell, August 30, 2023

AI Weird? Who Knew?

August 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Captain Obvious here. Today’s report comes from the IEEE, an organization for really normal people. Oh, you are not an electrical engineer? Then, you are not normal. Just ask an EE about normalcy.

Enough electrical engineer humor. Oh, well, one more: Which is a more sophisticated engineer? [a] Civil, [b] Mechanical, [c] Electrical, [d] Nuclear. The answer is [d] nuclear. Why? You have to be able to do math, chemistry, and fix a child’s battery powered toy. Get it? I must admit that I did not when Dr. James Terwilliger told it to me when I worked at the Halliburton nuclear outfit. Never heard of it? Well, there you go. Just ask a chatbot to fill you in.

I read “Why Today’s Chatbots Are Weird, Argumentative, and Wrong.” The IEEE article is going to create some tension in engineering-forward organizations. Most of these outfits run, in the words of insightful leaders like the stars of the “All In” podcast, on booze, money, gambling, and confidence — a heady mixture indeed.

What does the write up say that Captain Obvious did not know? That’s a poor question. The answer is, “Not much.”

Here’s a passage which received the red marker treatment from this dinobaby:

[Generative AI services have] become way more fluent and more subtly wrong in ways that are harder to detect.

I love the “way more.” The key phrase in the extract, at least for me, is: “Harder to detect.” But why? Is it because developers are improving their generative systems a tweak and a human judgment at a time? The “detect” folks are in react mode. Does this suggest that, at least for now, the cat-and-mouse game ensures an advantage to the steadily improving generative systems? In simple terms, non-electrical engineers are going to be “subtly” fooled? It sure does.

A second example of my big Japanese chunky marker circling behavior is this snippet:

The problem is the answers do look vaguely correct. But [the chatbots] are making up papers, they’re making up citations or getting facts and dates wrong, but presenting it the same way they present actual search results. I think people can get a false sense of confidence on what is really just probability-based text.

Are you getting a sense that a person who is not really informed about a topic will read baloney and perceive it as a truffle?

Captain Obvious is tired of this close reading game. For more AI insights, just navigate to the cited IEEE article. And be kind to electrical engineers. These individuals require respect and adulation. Make a misstep and your child’s battery powered toy will never emit incredibly annoying squeaks again.

Stephen E Arnold, August 29, 2023

Calls for AI Pause Futile at This Late Date

August 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Well, the nuclear sub has left the base. A group of technology experts recently called for a 6-month pause on AI rollouts in order to avoid the very “loss of control of our civilization” to algorithms. That might be a good idea—if it had a snowball’s chance of happening. As it stands, observes ComputerWorld‘s Rob Enderle, “Pausing AI Development Is a Foolish Idea.” We think foolish is not a sufficiently strong word. Perhaps regulation could have been established before the proverbial horse left the barn, but by now there are more than 500 AI startups, according to Jason Calacanis, noted entrepreneur and promoter.

A sad sailor watches the submarine to which he was assigned leave the dock without him. Thanks, MidJourney. No messages from Mother MJ on this image.

Enderle opines as a premier pundit:

“Once a technology takes off, it’s impossible to hold back, largely because there’s no strong central authority with the power to institute a global pause — and no enforcement entity to ensure the pause directive is followed. The right approach would be to create such an authority beforehand, so there’s some way to assure the intended outcome. I tend to agree with former Microsoft CEO Bill Gates that the focus should be on assuring AI reliability, not trying to pause everything. … There simply is no global mechanism to enforce a pause in any technological advance that has already reached the market.”

We are reminded that even development on clones, which is illegal in most of the world, continues apace. The only thing bans seem to have accomplished there is to obliterate transparency around cloning projects. There is simply no way to rein in all the world’s scientists. Not yet. Enderle offers a grain of hope on artificial intelligence, however. He notes it is not too late to do for general-purpose AI what we failed to do for generative AI:

“General AI is believed to be more than a decade in the future, giving us time to devise a solution that’s likely closer to a regulatory and oversight body than a pause. In fact, what should have been proposed in that open letter was the creation of just such a body. Regardless of any pause, the need is to ensure that AI won’t be harmful, making oversight and enforcement paramount. Given that AI is being used in weapons, what countries would allow adequate third-party oversight? The answer is likely none — at least until the related threat rivals that of nuclear weapons.”

So we have that to look forward to. And clones, apparently. The write-up points to initiatives already in the works to protect against “hostile” AI. Perhaps they will even be effective.

Cynthia Murrell, August 29, 2023

The Age of the Ideator: Go Fast, Ideate!

August 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “To De-Risk AI, the Government Must Accelerate Knowledge Production.” The essay introduces a word I am not sure I have seen before; that is, “ideator.” An ideator, I think, is a human (not a software machine), one of the people “who can have outsized impact on the world.” I think the author is referring to the wizard El Zucko (father of Facebook), the affable if mercurial Elon Musk, or the AI leaning Tim Apple. I am reasonably certain that the “outsized influence” moniker does not apply to the lip-smacking Spanish football executive, Vlad Putin, or similar go-getters.

Share my information with a government agency. Are you crazy? asks the hard charging, Type A overachiever working wonders with smart software designed for autonomous weapons. Thanks, MidJourney. Not what I specified but close enough for horse shoes.

The pivotal idea is good for ideators. These individuals come up with ideas. These should be good ideas which flow from ideators of the right stripe. Solving problems requires information. Ideators like information, maybe crave it? The white hat ideators can neutralize non-white hat ideators. Therefore, white hat ideators need access to information. The non-white hat ideator won’t have a chance. (No, I won’t ask, “What happens when a white hat ideator flips, changes to a non-white hat, and uses information in ways different from the white hat types’ actions?”)

What’s interesting about the essay is that the “fix” is to go fast when it comes to making information and then give the white hat folks access. To make the system work, a new government agency is needed. (I assume that the author is thinking about a US, Canadian, or Australian, or Western European government agency.)

That agency will pay the smart software outfits to figure out “AI alignment.” (I must admit I am a bit fuzzy on how commercial enterprises with trade secrets will respond to the “alignment.”) The new government agency will have oversight authority and will publish the work of its professionals. The government will not try to slow down or impede the “alignment.”

I have simplified most of the ideas for one reason. I want to conclude this essay with a single question, “How are today’s government agencies doing with homelessness, fiscal management, health care, and regulation of high-technology monopolies?”

Alignment? Yeah.

Stephen E Arnold, August 28, 2023

Software Marches On: Should Actors Be Worried?

August 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“How AI Is Bringing Film Stars Back from the Dead” is going to raise the hackles of some professionals in Hollywood. I wonder how many people alive today remember James Dean. Car enthusiasts may know about his driving skills, but not too much about his dramaturgical abilities. I must confess that I know zippo about Jimmy other than he was a driver prone to miscalculations.

An angry human actor — recycled and improved by smart software — snarls, “I didn’t go to acting school to be replaced by software. I have a craft, and it deserves respect.” MidJourney, I only had to describe what I wanted one time. Keep on improving or recursing or whatever it is you do.

The Beeb reports:

The digital cloning of Dean also represents a significant shift in what is possible. Not only will his AI avatar be able to play a flat-screen role in Back to Eden and a series of subsequent films, but also to engage with audiences in interactive platforms including augmented reality, virtual reality and gaming. The technology goes far beyond passive digital reconstruction or deepfake technology that overlays one person’s face over someone else’s body. It raises the prospect of actors – or anyone else for that matter – achieving a kind of immortality that would have been otherwise impossible, with careers that go on long after their lives have ended.

The write up does not reference the IBM study suggesting that 40 percent of workers will require reskilling. I am not sure what a reskilled actor will be able to do. I polled my team and it came up with some Hollywood possibilities:

  1. Become an AI adept with a mastery of Python, Java, and C. Code software replacing studio executives with a product called DorkMBA
  2. Channel the anger into a co-ed game of baseball and discuss enthusiastically with the umpire corrective lenses
  3. Start an anger management podcast and, like a certain Stanford professor, admit the indiscretions of one’s childhood
  4. Use MidJourney and ChatGPT to write a manga for Amazon
  5. Become a street person.

I am not sure these ideas will be acceptable to those annoyed by the BBC write up. I want to point out that smart software can do some interesting things. My hunch is that software can do endless versions of classic hits with old-time stars quickly and more economically than humanoid involved professionals.

I am not Bogarting you.

Stephen E Arnold, August 25, 2023

Generative AI: Not So Much a Tool But Something Quite Different

August 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Thirty years ago I had an opportunity to do a somewhat peculiar job. I had written, for a publisher in the UK, a version of a report my team and I prepared about Japan’s investments in its Fifth Generation Computer Revolution or some such government effort. A wealthy person who owned a medium-sized financial firm asked me if I would comment on a book called The Meaning of the Microcosm. “Sure,” I said.

This tiny, cute technology creature has just crawled from the ocean, and it is looking for lunch. Who knew that it could morph into a much larger and more disruptive beast? Thanks, MidJourney. No review committee for me this morning.

What I described was technology’s Darwinian behavior. I am not sure I was breaking new ground, but it seemed safe for me to point to how a technology survived. Therefore, I argued in a private report to this wealthy fellow that betting on a winner would make one rich. I tossed in an idea that I have thought about for many years; specifically, as technologies battle to “survive,” the technologies evolve and mutate. The angle I have commented about for many years is simple: Predicting how a technology mutates is a tricky business. Mutations can be tough to spot or just pop up. Change just says, “Hello, I am here.”

I thought about this “book commentary project” when I read “How ChatGPT Turned Generative AI into an Anything Tool.” The article makes a number of interesting observations. Here’s one I noted:

But perhaps inadvertently, these same changes let the successors to GPT3, like GPT3.5 and GPT4, be used as powerful, general-purpose information-processing tools—tools that aren’t dependent on the knowledge the AI model was originally trained on or the applications the model was trained for. This requires using the AI models in a completely different way—programming instead of chatting, new data instead of training. But it’s opening the way for AI to become general purpose rather than specialized, more of an “anything tool.”

I am not sure that “anything tool” is a phrase with traction, but it captures the idea of a technology that began as a sea creature, morphing, and then crawling out of the ocean looking for something to eat. The current hungry technology is smart software. Many people see the potential of combining repetitive processes with smart software in order to combine functions, reduce costs, or create alternatives to traditional methods of accomplishing a task. A good example is the use college students are making of the “writing” ability of free or low-cost services like ChatGPT.
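The “programming instead of chatting, new data instead of training” idea in the quoted passage can be made concrete with a minimal sketch. Here `call_model` is a hypothetical stand-in for whatever LLM API one actually uses, not a real library call; the point is that the model is handed fresh data plus a rigid instruction and treated as one validated component in a pipeline rather than a conversation partner.

```python
import json

def build_prompt(record: str) -> str:
    # "New data instead of training": the record is supplied at query
    # time; the model was never trained on it.
    return (
        "Classify the sentiment of the following review as "
        '"positive" or "negative". Respond with JSON only, '
        'for example {"sentiment": "positive"}.\n\n'
        f"Review: {record}"
    )

def parse_response(raw: str) -> str:
    # "Programming instead of chatting": validate the model's output
    # rather than trusting it, since generative systems can be
    # "subtly wrong" in ways that are harder to detect.
    data = json.loads(raw)
    sentiment = data.get("sentiment")
    if sentiment not in ("positive", "negative"):
        raise ValueError(f"unexpected model output: {raw!r}")
    return sentiment

# With a real API, the flow would be something like:
#   raw = call_model(build_prompt("The film was a delight."))
#   label = parse_response(raw)
```

The prompt builder and the strict output parser, not the model itself, are what turn a chatbot into a general-purpose information-processing component.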

But more is coming. As I recall, in my discussion of the microcosm book, I made Mr. Gilder’s point that small-scale systems and processes can have profound effects on larger systems and society as a whole. But a technology “innovation” like generative AI is simultaneously “small” and “large.” Perspective and point of view are important in software. Plus, the innovations of the transformer and the larger applications of generative AI to college essays illustrate the scaling impact.

What makes AI interesting for me at this time is that genetic / Darwinian change is occurring across the scale spectrum. On one hand, developers are working to create big applications; for instance, SaaS solutions that serve millions of users. On the other hand, shifting from large language models to smaller, more efficient methods of getting smart aim to reduce costs and speed the functioning of the plumbing.

The cited essay in Ars Technica is on the right track. However, the examples chosen are, it seems to me, ignoring the surprises the iterations of the technology will deliver. Is this good or bad? I have no opinion. What is important is that wild and crazy ideas about control and regulation strike me as bureaucratic time wasting. Millions of years ago, the job was to get out of the way of the hungry creature from the ocean of ones and zeros, then figure out how to catch the creature and have dinner, turn its body parts into jewelry which could be sold online, or process the beastie into a heat-and-serve meal at Trader Joe’s.

My point is that the generative innovations do not comprise a “tool.” We’re looking at something different, semi-intelligent, and evolving with speed. Will it be “let’s have lunch” or “one is lunch”?

Stephen E Arnold, August 24, 2023

Thought Leader Thinking: AI Both Good and Bad. Now That Is an Analysis of Note

August 17, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read what I consider a “thought piece.” This type of essay discusses a topic and attempts to place it in a context of significance. The “context” is important. A blue chip consulting firm may draft a thought piece about forever chemicals. Another expert can draft a thought piece about these chemicals in order to support the companies producing them. When thought pieces collide, there is a possible conference opportunity, definitely some consulting work to be had, and today maybe a ponderous online webinar. (Ugh.)

A modern Don Quixote and thought leader essay writer lines up a windmill and charges. As the bold 2023 Don shouts, “Vile and evil windmill, you pretend to grind grain but you are a mechanical monster destroying the fair land. Yield, I say.” The mechanical marvel just keeps on turning, and the modern Don is ignored until a blade of the windmill knocks the knight to the ground. Thanks, MidJourney. It only took three tries to get close to what I described. Outstanding evidence of degradation of function.

“The AI Power Paradox: Can States Learn to Govern Artificial Intelligence—Before It’s Too Late?” considers the “problem” of smart software. My recollection is that artificial intelligence and machine learning have been around for decades. I have a vivid recollection of a person named, I believe, Marvin Weinberger. This gentleman made an impassioned statement at an Information Industry Association meeting about the need for those in attendance to amp up their work with smart software. The year, as I recall, was 1981.

The thought piece does not dwell on the long history of smart software. The interest is in what the thought piece presents as its context; that is:

And generative AI is only the tip of the iceberg. Its arrival marks a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies.

The excitement about smart software is sufficiently robust to magnetize those who write thought pieces. Is the outlook happy or sad? You judge. The essay asserts:

In May 2023, the G-7 launched the “Hiroshima AI process,” a forum devoted to harmonizing AI governance. In June, the European Parliament passed a draft of the EU’s AI Act, the first comprehensive attempt by the European Union to erect safeguards around the AI industry. And in July, UN Secretary-General Antonio Guterres called for the establishment of a global AI regulatory watchdog.

I like the reference to Hiroshima.

The thought piece points out that AI is “different.”

It does not just pose policy challenges; its hyper-evolutionary nature also makes solving those challenges progressively harder. That is the AI power paradox. The pace of progress is staggering.

The thought piece points out that AI or any other technology is “dual use”; that is, one can make a smart microwave or one can make a smart army of robots.

Where is the essay heading? Let’s try to get a hint. Consider this passage:

The overarching goal of any global AI regulatory architecture should be to identify and mitigate risks to global stability without choking off AI innovation and the opportunities that flow from it.

From my point of view, we have a thought piece which recycles a problem similar to squaring the circle.

The fix, according to the thought piece, is to create a “minimum of three AI governance regimes, each with different mandates, levers, and participants.”

To sum up, we have consulting opportunities, we have webinars, and we have global regulatory “entities.” How will that work out? Have you tried to get someone in a government agency, a non-governmental organization, or federation of conflicting interests to answer a direct question?

While one waits for the smart customer service system to provide an answer, the decades old technology will zip along leaving thought piece ideas in the dust. Talk global; fail local.

Stephen E Arnold, August 17, 2023

Wanna Be an AI Entrepreneur? Part 2

August 17, 2023

MIT digital-learning dean Cynthia Breazeal and Yohana founder Yoky Matsuoka have a message for their entrepreneurship juniors. Forbes shares “Why These 50 Over 50 Founders Say Beware of AI ‘Hallucination’.” It is easy to get caught up in the hype around AI and leap into the fray before looking. But would-be AI entrepreneurs must approach their projects with careful consideration.

An entrepreneur “listens” to the AI experts. The AI machine spews money to the entrepreneur. How wonderful new technology is! Thanks, MidJourney for not asking me to appeal this image.

Contributor Zoya Hansan introduces these AI authorities:

“‘I’ve been watching generative AI develop in the last several years,’ says Yoky Matsuoka, the founder of a family concierge service called Yohana, and formerly a cofounder at Google X and CTO at Google Nest. ‘I knew this would blow up at some point, but that whole ‘up’ part is far bigger than I ever imagined.’

Matsuoka, who is 51, is one of the 20 AI maestros, entrepreneurs and science experts on the third annual Forbes 50 Over 50 list who’ve been early adopters of the technology. We asked these experts for their best advice to younger entrepreneurs leveraging the power of artificial intelligence for their businesses, and each one had the same warning: we need to keep talking about how to use AI responsibly.”

The pair have four basic cautions. First, keep humans on board. AI can often offer up false information, problematically known as “hallucinations.” Living, breathing workers are required to catch and correct these mistakes before they lead to embarrassment or even real harm. The founders also suggest putting guardrails on algorithmic behavior; in other words, impose a moral (literal) code on one’s AI products. For example, eliminate racial and other biases, or refuse to make videos of real people saying or doing things they never said or did.

In terms of launching a business, resist pressure to start an AI company just to attract venture funding. Yes, AI is the hot thing right now, but there is no point if one is in a field where it won’t actually help operations. The final warning may be the most important: “Do the work to build a business model, not just flashy technology.” The need for this basic foundation of a business does not evaporate in the face of hot tech. Learn from Breazeal’s mistake:

“In 2012, she founded Jibo, a company that created the first social robot that could interact with humans on a social and emotional level. Competition with Amazon’s Alexa—which takes commands in a way that Jibo, created as a mini robot that could talk and provide something like companionship, wasn’t designed to do—was an impediment. So too was the ability to secure funding. Jibo did not survive. ‘It’s not the most advanced, best product that wins,’ says Breazeal. ‘Sometimes it’s the company who came up with the right business model and figured out how to make a profit.'”

So would-be entrepreneurs must proceed with caution, refusing to let the pull of the bleeding edge drag one ahead of oneself. But not too much caution.

Cynthia Murrell, August 17, 2023

AI and Increasing Inequality: Smart Software Becomes the New Dividing Line

August 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Will AI Be an Economic Blessing or Curse?” engages in prognosticative “We will be sorry” analysis. Yep, I learned about this idea in Dr. Francis Chivers’ class about Epistemology at Duquesne University. Wow! Exciting. The idea is that knowing is phenomenological. Today’s manifestation of this mental process is in the “fake data” and “alternative facts” approach to knowledge.

An AI engineer cruising the AI highway. This branch of the road does not permit boondocking or begging. MidJourney disappointed me again. Sigh.

Nevertheless, the article makes a point I find quite interesting; specifically, the author invites me to think about the life of a peasant in the Middle Ages. There were some technological breakthroughs despite the Dark Ages and the charmingly named Black Death. Even though plows improved and water wheels were rediscovered, peasants were born into a social system. The basic idea was that the poor could watch rich people riding through fields and sometimes a hovel in pursuit of fun, someone who did not meet their quota of wool, or a toothsome morsel. You will have to identify a suitable substitute for the morsel token.

The write up points out (incorrectly in my opinion):

“AI has got a lot of potential – but potential to go either way,” argues Simon Johnson, professor of global economics and management at MIT Sloan School of Management. “We are at a fork in the road.”

My view is that the AI smart software speedboat is roiling the data lakes. Once those puppies hit 70 mph on the water, the casual swimmers or ill prepared people living in houses on stilts will be disrupted.

The write up continues:

Backers of AI predict a productivity leap that will generate wealth and improve living standards. Consultancy McKinsey in June estimated it could add between $14 trillion and $22 trillion of value annually – that upper figure being roughly the current size of the U.S. economy.

On the bright side, the write up states:

An OECD survey of some 5,300 workers published in July suggested that AI could benefit job satisfaction, health and wages but was also seen posing risks around privacy, reinforcing workplace biases and pushing people to overwork.
“The question is: will AI exacerbate existing inequalities or could it actually help us get back to something much fairer?” said Johnson.

My view is not populated with an abundance of happy faces. Why? Here are my observations:

  1. Those with knowledge about AI will benefit
  2. Those with money will benefit
  3. Those in the right place at the right time and good luck as a sidekick will benefit
  4. Those not in Groups one, two, and three will be faced with the modern equivalent of laboring as a peasant in the fields of the Loire Valley.

The idea that technology democratizes is not in line with my experience. Sure, most people can use an automatic teller machine and a mobile phone functioning as a credit card. Those who can use them, however, are not likely to find themselves wallowing in the big bucks of the firms or bureaucrats who are in the AI money rushes.

Income inequality is one visible facet of a new data flyway. Some get chauffeured; others drift through it. Many stand and marvel at rushing flows of money. Some hold signs with messages like “Work needed” or “Homeless. Please, help.”

The fork in the road? Too late. The AI Flyway has been selected. From my vantage point, one benefit will be that those who can drive have some new paths to explore. For many, maybe orders of magnitude more people, the AI Byway opens new areas for those who cannot afford a place to live.

The write up assumes the fork to the AI Flyway has not been taken. It has, and it is not particularly scenic when viewed from a speeding start up gliding on neural networks.

Stephen E Arnold, August 16, 2023

Wanna Be an AI Entrepreneur: Part 1, A How To from Crypto Experts

August 16, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

For those looking to learn more about AI, venture capital firm Andreessen Horowitz has gathered resources from across the Internet for a course of study it grandly calls the “AI Canon.” It is a VC’s dream curriculum in artificial intelligence. Naturally, the authors include a link to each resource. The post states:

“Research in artificial intelligence is increasing at an exponential rate. It’s difficult for AI experts to keep up with everything new being published, and even harder for beginners to know where to start. So, in this post, we’re sharing a curated list of resources we’ve relied on to get smarter about modern AI. We call it the ‘AI Canon’ because these papers, blog posts, courses, and guides have had an outsized impact on the field over the past several years. We start with a gentle introduction to transformer and latent diffusion models, which are fueling the current AI wave. Next, we go deep on technical learning resources; practical guides to building with large language models (LLMs); and analysis of the AI market. Finally, we include a reference list of landmark research results, starting with ‘Attention is All You Need’ — the 2017 paper by Google that introduced the world to transformer models and ushered in the age of generative AI.”

Yes, the Internet is flooded with articles about AI, some by humans and some by self-reporting algorithms. Even this curated list is a bit overwhelming, but at least it narrows the possibilities. It looks like a good place to start learning more about this inescapable phenomenon. And while there, one can invest in the firm’s hottest prospects, we think.

Cynthia Murrell, August 16, 2023
