Semantics and the Web: A Snort of Pisco?
November 16, 2021
I read a transcript of the video called “Semantics and the Web: An Awkward History.” I have done a little work in the semantic space, including a stint as an advisor to a couple of outfits. I signed confidentiality agreements with the firms, and even though both have entered the well-known Content Processing Cemetery, I won’t name them. However, I thought of the ghosts of these companies as I worked my way through the transcript. I don’t think I will have nightmares, but my hunch is that investors in these failed outfits may have bad dreams. A couple may experience post-traumatic stress. Hey, I am just suggesting people read the document, not go bonkers over its implications in our thumbtyping world.
I want to highlight a handful of gems I identified in the write up. If I get involved in another world-saving semantic project, I will want to have these in my treasure chest.
First, I noted this statement:
“Generic coding”, later known as markup, first emerged in the late 1960s, when William Tunnicliffe, Stanley Rice, and Norman Scharpf got the ideas going at the Graphics Communication Association, the GCA. Goldfarb’s implementations at IBM, with his colleagues Edward Mosher and Raymond Lorie, the G, M, and L, made him the point person for these conversations.
What’s not mentioned is that some in the US government became quite enthusiastic. Imagine the benefit of putting tags in text and providing electronic copies of documents. Much better than loose-leaf notebooks. I wish I had a penny for every time I heard this statement. How does the government produce documents today? The only technology not in wide use is hot metal type. It’s been, what, a half century?
Second, I circled this passage:
SGML included a sample vocabulary, built on a model from the earliest days of GML. The American Association of Publishers and others used it regularly.
Indeed wonderful. The phrase “slicing and dicing” captured the essence of SGML. Why have human editors? Use SGML. Extract chunks. Presto! A new book. That worked really well but for one drawback: the proliferation of wild and crazy “books” was tough to sell. Experts in SGML were and remain a rare breed of cat. There were SGML ecosystems, but adding smarts to content was and remains a work in progress. Yes, I am thinking of Snorkel too.
Third, I like this observation too:
Dumpsters are available in a variety of sizes and styles. To be honest, though, these have always been available. Demolition of old projects, waste, and disasters are common and frequent parts of computing.
The Web as well as social media are dumpsters. Let’s toss in TikTok type videos too. I think meta meta tags can burn in our cherry red garbage container. Why not?
What do these observations have to do with “semantics”?
- Move from SGML to XML. Much better. Allow XML to run some functions. Yes, great idea. (A small sketch of this type of tagging appears after this list.)
- Create a way to allow content objects to be anywhere. Just pull them together. Was this the precursor to micro services?
- One major consequence of tagging (or the lack of it, or just really lousy tagging and markup) and of relying on software allegedly doing the heavy lifting is an active demand for a way to “make sense” of content. The problem is that an increasing amount of content is non-textual. Oops.
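To make the tagging point concrete, here is a minimal sketch in Python of the “slicing and dicing” idea: mark up the content, then pull out whichever chunks match. The XML vocabulary (book, chapter, para) and the topic attribute are invented for the example, not taken from SGML, the GCA, or any real publisher’s DTD.

```python
# Minimal sketch: extract tagged chunks from marked-up content.
# The element names and the "topic" attribute are hypothetical.
import xml.etree.ElementTree as ET

DOC = """
<book>
  <chapter topic="search">
    <title>Finding Things</title>
    <para>Tagged text can be pulled out and recombined.</para>
  </chapter>
  <chapter topic="drones">
    <title>Other Matters</title>
    <para>Different chunks feed a different publication.</para>
  </chapter>
</book>
"""

root = ET.fromstring(DOC)

# Keep only the chunks tagged with the topic we care about.
search_chunks = [
    chapter.findtext("para")
    for chapter in root.findall("chapter")
    if chapter.get("topic") == "search"
]
print(search_chunks)  # ['Tagged text can be pulled out and recombined.']
```

The catch, as the history above suggests, is that someone still has to put the tags in correctly; the extraction step is the easy part.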
What’s the fix? The semantic Web revivified? The use of pre-structured, by golly, correct mark up editors? A law that says students must learn how to mark up and tag? (Problem: Schools don’t teach math and logic anymore. Oh, well, there’s an online course for those who don’t understand consistency and rules.)
The write up makes clear there are numerous opportunities for innovation. And then there is the non-textual information. Academics have some interesting ideas. Why not go SAILing or revisit the world of semantic search?
Stephen E Arnold, November 16, 2021
Exposing Big Data: A Movie Person Explains Fancy Math
April 16, 2021
I am not “into” movies. Some people are. I knew a couple of Hollywood types, but I was dumbfounded by their thought processes. One of these professionals dreamed of crafting a motion picture about riding a boat powered by the wind. I think I understand because I skimmed one novel by Herman Melville, who grew up with servants in the house. Yep, in touch with the real world of fish and storms at sea.
However, perhaps an exception is necessary. A movie type offered some interesting ideas in the BBC “real” news story “Documentary Filmmaker Adam Curtis on the Myth of Big Data’s Predictive Power: It’s a Modern Ghost Story.” Note: This article is behind a paywall designed to compensate content innovators for their highly creative work. You have been warned.
Here are several statements I circled in bright True Blue marker ink:
- “The best metaphor for it is that Amazon slogan, which is: ‘If you like that, then you’ll like this,’” said [Adam] Curtis [the documentary film maker].
- [Adam Curtis] pointed to the US National Security Agency’s failure to intercept a single terrorist attack, despite monitoring the communications of millions of Americans for the better part of two decades.
- “[Big data and online advertising] is a bit like sending someone with a flyer advertising pizzas to the lobby of a pizza restaurant,” said Curtis. “You give each person one of those flyers as they come into the restaurant and they walk out with a pizza. It looks like it’s one of your flyers that’s done it. But it wasn’t – it’s a pizza restaurant.”
Maybe I should pay more attention to the filmic mind. These observations strike me as accurate.
Predictive analytics, fancy math, and smart software? Ghosts.
But what if ghosts are real?
Stephen E Arnold, April 16, 2021
MIT Deconstructs Language
April 14, 2021
I got a chuckle from the MIT Technology Review write up “Big Tech’s Guide to Talking about AI Ethics.” The core of the write up is a list of token words like “framework”, “transparency”, “by design”, “progress”, and “trustworthy.” The idea is that instead of explaining the craziness of smart software with phrases like “yeah, the intern who set up the thresholds is now studying Zen in Denver” or “the lady in charge of that project left in weird circumstances, but I don’t follow that human stuff,” the big tech outfits, which have a generous dollop of grads from outfits like MIT, string together token words to explain what 85 percent confidence means. Yeah, think about it when you ask your pediatrician if the antidote given to your child will work. Here’s the answer most parents want to hear: “Ashton will be just fine.” Parents don’t want to hear, “Probably 15 out of every 100 kids getting this drug will die. Close enough for horseshoes.”
The hoot is that I took a look at MIT’s statements about Jeffrey Epstein and the hoo-hah about the money this estimable person contributed to the MIT outfit. Here are some phrases I selected plus their source.
- a thorough review of MIT’s engagements with Jeffrey Epstein (Link to source)
- no role in approving MIT’s acceptance of the donations. (Link to source)
- gifts to the Institute were approved under an informal framework (Link to source)
- for all of us who love MIT and are dedicated to its mission (Link to source)
- this situation demands openness and transparency (Link to source).
Yep, “framework”, “openness,” and “transparency.” Reassuring words like “thorough” and passive voice. Excellent.
Word tokens are worth what exactly?
Stephen E Arnold, April 14, 2021
Palantir Fourth Quarter Results Surprise One Financial Pundit
February 22, 2021
I read “Palantir Stock Slides As It Posts a Surprise Loss in Fourth Quarter.” The pundit noted:
Palantir stock has been very volatile this year. It is among the stocks that have been pumped by the Reddit group WallStreetBets. Palantir stock had a 52-week high of $45 amid frenzied buying. However, as has been the case with other meme stocks, it is down sharply from its recent highs. Based on yesterday’s closing prices, Palantir stock has lost almost 30% from its 52-week highs. The drawdown is much lower than what we’ve seen in stocks like GameStop and AMC Theatres. But then, the rise in Palantir stock was also not comparable to the massive gains that we saw in these companies.
Yikes. Worse than GameStop? Quite a comparison.
The pundit pointed out:
Palantir has been diversifying itself away from government business that currently accounts for the bulk of its revenues. This year, it has signed many deals that would help it diversify its revenues. Earlier this month, Palantir announced that it has extended its partnership with energy giant BP for five more years.
Who knew that a company founded in 2003 would have difficulty meeting Wall Street expectations? Maybe that IBM deal and the new US president’s administration can help Palantir Technologies meet financial experts’ expectations?
Search and content processing companies have been worn down by long sales cycles, lower cost competitors, and the friction of customization, training, and fiddling with content intake.
Palantir might be an exception. Stakeholders are discomfited by shocks.
Stephen E Arnold, February 22, 2021
Linear Math Textbook: For Classroom Use or Individual Study
October 30, 2020
Jim Hefferon’s Linear Algebra is a math textbook. You can get it for free by navigating to this page. From Mr. Hefferon’s Web page for the book, you can download a copy and access a range of supplementary materials. These include:
- Classroom slides
- Exercise sets
- A “lab” manual which requires Sage
- Video.
The book is designed for students who have completed one semester of calculus. Remember: Linear algebra is useful for poking around in search or neutralizing drones. Zaap. Highly recommended.
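Since the post ties linear algebra to search, here is a minimal sketch, assuming a toy term-vector setup, of one way the subject shows up there: dot products and vector norms give cosine similarity, which ranks documents against a query. The vocabulary and the counts are invented for the example.

```python
# Minimal sketch: cosine similarity for ranking documents, a toy
# example of linear algebra in search. All numbers are invented.
import numpy as np

# Rows are documents; columns are term counts for a three-word vocabulary.
docs = np.array([
    [2.0, 0.0, 1.0],
    [0.0, 3.0, 1.0],
    [1.0, 1.0, 0.0],
])
query = np.array([1.0, 0.0, 1.0])

# Cosine similarity: dot product divided by the product of the norms.
scores = (docs @ query) / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
ranking = np.argsort(-scores)  # best-matching document first
print(ranking, scores[ranking])
```

Nothing fancier than a first-semester linear algebra course, which is rather the textbook’s point.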
Stephen E Arnold, October 30, 2020
Text Analytics: Are These Really the Companies to Watch in the Next 12 Weeks?
October 16, 2020
DarkCyber spotted “Top 10 Text Analytics Companies to Watch in 2020.” Let’s take a quick look at some basic details about each firm:
Alkymi, founded in 2017, makes an email indexing system. The system, according to the company’s Web site, “understands documents using deep learning and visual analysis paired with your human in-the-loop expertise.” Interesting, but text analytics appears to be a component of a much larger system. Also worth noting: the business relies to some degree upon Amazon Web Services. The company’s Web site is https://alkymi.io/.
Aylien Ltd., based in Ireland, appears to be a company with text analysis technology. However, the company’s system is used to create intelligence reports for analysts; for example, government intelligence officers, business analysts, and media outlets. Founded in 2010, the company’s Web site is https://aylien.com.
Hewlett Packard Enterprise. The inclusion of HPE was a bit of a surprise. This outfit once owned the Autonomy technology, but divested itself of the software and services. To replace Autonomy, the company developed “Advanced Text Analysis,” which appears to be an enterprise search centric system. The service is available as a Microsoft Azure function and offers 60 APIs (which seems particularly generous) “that deliver deep learning analytics on a wide range of data.” The company’s Web site is https://www.hpe.com/in/en/home.html. One product name jumped out: Ezmeral, which may be a made-up word.
InData Labs lists data science, AI, AI driven mobile app development, computer vision, machine learning, data capture and optical character recognition, and big data solutions as its services. Its products include face recognition and natural language processing. Perhaps it is the NLP product which equates to text analytics? The firm’s Web site is https://indatalabs.com/. The company was founded in 2014, operates from Belarus, and has a San Francisco presence.
Kapiche, founded in 2016, focuses on “customer insights”. Customer feedback yields insight with “no set up, no manual coding, and results you can trust,” according to the company. The text analytics snaps into services like Survey Monkey and Google Forms, among others. Clients include Target and Toyota. The company is based in Australia with an office in Denver, Colorado. The firm’s Web site is https://www.kapiche.com. The firm offers applied text analytics.
Lexalytics, founded in 2003, was one of the first standalone text analytics vendors. The company’s system allows customers to “tell powerful stories from complex text data.” DarkCyber prefers to learn “stories” from the data, however. In the last 17 years, the company has not gone public nor been acquired. The firm’s Web site is https://www.lexalytics.com/.
MindGap. The MindGap identified in the article is in the business of providing “AI for business.” The company appears to be a mash up of artificial intelligence and “top tier strategy consulting.” That may be true, but we did not spot text analytics among the core competencies. The firm’s clients include Mail.ru, Gazprom, Yandex, and Huawei. The firm’s Web site is https://www.mindgap.dev/. The firm lists two employees on LinkedIn.
Primer has ingested about $60 million in venture funding since it was founded in 2015. The company ingests text and outputs reports. The company was founded by the individual who set up Quid, another analytics company. Government and business analysts consume the outputs of the Primer system. The company’s Web site is https://primer.ai.
Semeon Analytics, now a unit of Datametrex, provides “custom language and sentiment ontology” services. Indexing and entity extraction, among other NLP modules, allow the system to deliver “insight analysis, rapid insights, and sentiment of the highest precision on the market today.” The Semeon Web site is still online at https://semeon.com.
ThoughtTrace appears to focus on analysis of text in contracts. The firm’s Web site says that its software can “find critical contract facts and opportunities.” Text analytics? Possibly, but the wording suggests search and retrieval. The company has a focus on oil and gas and other verticals. The firm’s Web site is https://www.thoughttrace.com/. (Note that the design of the Web site creates some challenges for a person looking for information.) The company, according to Crunchbase, was founded in 1999 and has three employees.
Three companies are what DarkCyber would consider text analytics firms: Aylien, Lexalytics, and Primer. The other firms mash up artificial intelligence, machine learning, and text analytics to deliver solutions which are essentially indexing and workflow tools.
Other observations include:
- The list is not a reliable place to locate flagship vendors; specifically, only three of the 10 companies cited in the article could be considered contenders in this sector.
- The text analytics capabilities and applications are scattered. A person looking for a system which is designed to handle email would have to examine the 10 listings and work from a single pointer, Alkymi.
- The selection of vendors confuses technical disciplines; for example, AI, machine learning, NLP, etc.
The list appears to have been generated in a short Zoom meeting, not via a rigorous selection and analysis process. Perhaps one of the vendors’ text analytics systems could have been used. Primer’s system comes to mind as one possibility. But that, of course, is work for a real journalist today.
Stephen E Arnold, October 16, 2020
Natural Language Processing: Useful Papers Selected by an Informed Human
July 28, 2020
Nope, no artificial intelligence involved in this curated list of papers from a recent natural language conference. Ten papers are available with a mouse click. Quick takeaway: Adversarial methods seem to be a hot ticket. Navigate to “The Ten Must Read NLP/NLU Papers from the ICLR 2020 Conference.” Useful editorial effort and a clear, adult presentation of the bibliographic information. Kudos to jakubczakon.
Stephen E Arnold, July 28, 2020
Cambridge Analytica: Maybe a New Name and Some of the Old Methods?
December 29, 2019
DarkCyber spotted an interesting factoid in “HH Plans to Work with the Re-Branded Cambridge Analytica to Influence 2021 Elections.”
The new company, Auspex International, will keep former Cambridge Analytica director Mark Turnbull at the helm.
Who is HH? He is President Hakainde Hichilema, serving at this time in Zambia.
The business focus of Auspex is, according to the write up:
We’re not a data company, we’re not a political consultancy, we’re not a research company and we’re not necessarily just a communications company. We’re a combination of all four.—Ahmad Al-Khatib, a Cairo-born investor
You can obtain some information about Auspex at this url: https://www.auspex.ai/.
DarkCyber noted the use of the “ai” domain. See the firm’s “What We Believe” information at this link. It is good to have a reason to get out of bed in the morning.
Stephen E Arnold, December 29, 2019
Google Trends Used to Reveal Misspelled Wirds or Is It Words?
November 25, 2019
We spotted a listing of the most misspelled words in each of the USA’s 50 states. Too bad Puerto Rico. Kentucky’s most misspelled word is “ninety.” Navigate to Considerable and learn what residents cannot spell. How often? Silly kweston.
The listing includes some bafflers and may reveal what can go wrong with data from an online ad sales data collection system; for example:
- Washington, DC (which is not a state in DarkCyber’s book) cannot spell “enough”; for example, “enuf already with these televised hearings and talking heads”
- Idaho residents cannot spell “embarrassed,” which, as listeners to Kara Swisher know, has two r’s and two s’s. Helpful, that.
- Montana residents cannot spell “comma.” Do those in Montana use commas?
- And not surprisingly, those in Tennessee cannot spell “intelligent.” Imagine that!
What happens if one trains smart software on these data?
Sumthink mite go awf the railz.
Stephen E Arnold, November 25, 2019
Gender Bias in Old Books. Rewrite Them?
October 9, 2019
Here is an interesting use of machine learning. Salon tells us “What Reading 3.5 Million Books Tells Us About Gender Stereotypes.” Researchers led by University of Copenhagen’s Dr. Isabelle Augenstein analyzed 11 billion English words in literature published between 1900 and 2008. Not surprisingly, the results show that adjectives about appearance were most often applied to women (“beautiful” and “sexy” top the list), while men were more likely to be described by character traits (“righteous,” “rational,” and “brave” were most frequent). Writer Nicole Karlis describes how the team approached the analysis:
“Using machine learning, the researchers extracted adjectives and verbs connected to gender-specific nouns, like ‘daughter.’ Then the researchers analyzed whether the words had a positive, negative or neutral point of view. The analysis determined that negative verbs associated with appearance are used five times more for women than men. Likewise, positive and neutral adjectives relating to one’s body appearance occur twice as often in descriptions of women. The adjectives used to describe men in literature are more frequently ones that describe behavior and personal qualities.
“Researchers noted that, despite the fact that many of the analyzed books were published decades ago, they still play an active role in fomenting gender discrimination, particularly when it comes to machine learning sorting in a professional setting. ‘The algorithms work to identify patterns, and whenever one is observed, it is perceived that something is “true.” If any of these patterns refer to biased language, the result will also be biased,’ Augenstein said. ‘The systems adopt, so to speak, the language that we people use, and thus, our gender stereotypes and prejudices.’ Augenstein explained this can be problematic if, for example, machine learning is used to sift through employee recommendations for a promotion.”
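The quoted description of the method is high level. As a rough illustration of the extraction step only, here is a minimal sketch that uses spaCy’s dependency parse to collect adjectives attached to gendered nouns and then scores them against a tiny hand-made sentiment lexicon. The noun lists, the lexicon, and the sample sentences are invented for the example; this is not the Copenhagen team’s actual pipeline.

```python
# Minimal sketch (not the researchers' pipeline): find adjectives that
# modify or describe gendered nouns, then score them with a tiny,
# hand-made sentiment lexicon. All word lists here are illustrative.
from collections import defaultdict

import spacy

nlp = spacy.load("en_core_web_sm")

FEMALE = {"woman", "daughter", "mother", "girl"}
MALE = {"man", "son", "father", "boy"}
LEXICON = {"beautiful": 1, "brave": 1, "rational": 1, "weak": -1}

def gendered_adjectives(texts):
    found = defaultdict(list)
    for doc in nlp.pipe(texts):
        for tok in doc:
            if tok.pos_ != "ADJ":
                continue
            if tok.dep_ == "amod":       # e.g., "the brave father"
                noun = tok.head.lemma_.lower()
            elif tok.dep_ == "acomp":    # e.g., "the daughter was beautiful"
                subjects = [c for c in tok.head.children if c.dep_ == "nsubj"]
                noun = subjects[0].lemma_.lower() if subjects else ""
            else:
                continue
            adj = tok.lemma_.lower()
            if noun in FEMALE:
                found["female"].append((adj, LEXICON.get(adj, 0)))
            elif noun in MALE:
                found["male"].append((adj, LEXICON.get(adj, 0)))
    return found

print(gendered_adjectives(["The daughter was beautiful.", "The brave father spoke."]))
```

Scaling that idea up to 3.5 million books and 11 billion words is, of course, where the real work lies.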
Karlis does list some caveats to the study—it does not factor in who wrote the passages, what genre they were pulled from, or how much gender bias permeated society at the time. The research does affirm previous results, like the 2011 study that found 57% of central characters in children’s books are male.
Dr. Augenstein hopes her team’s analysis will raise awareness about the impact of gendered language and stereotypes on machine learning. If they choose, developers can train their algorithms on less biased materials or program them to either ignore or correct for biased language.
Cynthia Murrell, October 9, 2019