January 25, 2016
I know some “real” journalists and employees of “real” publishing outfits like Pearson want to work at the Alphabet Google thing. Sounds interesting, doesn’t it?
Navigate to “41 of the Trickiest Questions Google Will Ask You in a Job Interview” to get the inside scoop on landing that perfect gig. Just think. T-shirts, mouse pads with the Google logo, maybe a day at Google I/O.
I have a copy of the GLAT or Google Labs Aptitude Test. I believe this bit of tomfoolery was retired years ago, and I can say with some confidence that the questions presented by the UK newspaper may not do the trick for unemployed journalists and/or professional publishers riffed by outfits like Pearson.
Here are three questions to land you in your dream cubicle:
Math and probability: A coin was flipped 1000 times and there were 560 heads. Do you think the coin is biased? — Quantitative Analyst, September 2015
Search: How many ways can you think of to find a needle in a haystack? — Business Associate, May 2014
Fortune telling: How do you think the digital advertising world will change in the next 3 years? — Creative Director, January 2016
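Of the three, only the coin question has a checkable answer. Here is my own back-of-the-envelope sketch, not Google’s answer key, using the normal approximation to the binomial:

```python
import math

def coin_bias_check(flips: int, heads: int) -> tuple[float, float]:
    """Two-sided normal-approximation test of the null hypothesis
    that the coin is fair (p = 0.5)."""
    mean = flips * 0.5
    sd = math.sqrt(flips * 0.25)          # binomial standard deviation under H0
    z = (heads - mean) / sd               # standardized deviation from 500
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the z score
    return z, p

z, p = coin_bias_check(1000, 560)
# z comes out near 3.8 and p well under 0.001, so "yes, probably biased"
# is a defensible interview answer
```

Sixty extra heads sounds modest, but it is almost four standard deviations from what a fair coin should produce, which is presumably the point of the question.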
Those who want the thrill of the Alphabet Google life are now able to begin their preparatory work.
Oh, here’s another question:
Self entertainment for life’s sound track: If you could only choose one song to play every time you walked into a room for the rest of your life, what would it be? — Associate Account Strategist Interview, March 2014
Tip: Don’t choose a track from an Apple service.
Stephen E Arnold, January 25, 2016
December 15, 2015
I read “How You Should Explain Big Data to Your CEO.” The write up included a link which triggered thoughts of how enterprise search dug itself a hole and climbed in. Unable to extricate itself from a problem enterprise search vendors created, the entire sector has been marginalized. In some circles, enterprise search is essentially a joke. “Did you hear about the three enterprise search vendors who walked into a bar?” The bartender says, “What is this? Some kind of joke?”
The link pointed me to a Slideshare (owned by the email and job hunting champion LinkedIn). That presentation, “5 Signs Your Big Data Project is Doomed to Fail,” could have been borrowed from one of my talks about enterprise search in 2001. It was not, but the basic message was identical: Big Data has created a situation in which there are some challenges here and now.
The presentation was prepared by Qubole (maybe pronounced cue ball?). Qubole is a click to query outfit. This means that reports from Big Data are easy to generate.
Here are the problems Big Data faces:
- Failed implementations. Qubole asserts that 87 percent of Big Data implementations are flops
- 73 percent of executives describe their Big Data projects as flops
- Only 45 percent of Big Data projects are completed
These data are similar to the results of “satisfaction” with enterprise search solutions.
Why? Qubole asserts:
- Inaccurate project scope
- Inadequate management support
- No business case
- Lack of talent (in search the talent may be present but overestimates its ability to deal with enterprise search technologies and processes)
- “Challenging tools.” I think this means that in the Big Data world there are lots of complexities.
What can one charged with either search or Big Data tasks do with this information?
My view is, “Ignore it.”
The “can do” spirit carries professionals forward. Hiring a consultant provides some job protection but does little to reverse the failure and disappointment rate.
My view is that the willingness of executives to grab at a magic solution presented by a showman marketer overrides the failure data. The arrogance of those involved creates a “that won’t happen to us” belief.
Who is to blame? The company for believing in baloney? The marketer for painting a picture and showing a Hollywood style demo? The developers who created the Big Data solution, knowing that chunks were not complete or just did not work before the ship date? The in house engineers who lacked self knowledge to understand their own limitations?
Everyone is in the hole with the enterprise software vendors. The hole is deep. Magic solutions are difficult to pin down. The future of Big Data is likely to parallel to some degree the dismal track record of enterprise search. Fascinating. I can hear the mid tier consultants and the handful of remaining enterprise search vendors asserting that Qubole’s points are not applicable to their specific situation.
Yep, and I believe in the tooth fairy and Santa.
Stephen E Arnold, December 15, 2015
November 17, 2015
I read “Gawker Media’s Data Guru Presents the Case for Deleting Data.” The main idea is that hoarding physical stuff permits a reality TV program. Hoarding data may not be good TV.
The write up points out that data cleaning is not cheap. Storage also costs money.
A Gawker wizard is quoted as saying:
We effectively are setting traps in our data sets for our future selves and our colleagues… Increasingly, I find that eliminating this data from our databases is the best solution. Gawker’s traffic data is maintained for just a few months. In our own logs and databases, we only have traffic data since February, and even that’s of limited use: We’ll toss some of it before the end of the year.
Seems reasonable. However, there may be instances when dumping or just carelessly overwriting log files might not be expedient or legal. For example, in one government agency, the secretary’s “bonus” depends on showing how Internet site usage relates to paperwork reduction. The idea is that when a “customer” of the government uses a Web site and does not show up in person at an office to fill out a request, the “customer” allegedly gets better service and costs, in theory, should drop. Also, some deals require that data be retained. You can use your imagination if you are an ISP in a country recently attacked by terrorists and your usage logs are “disappeared.” SEC and IRS retention guidelines? Worth noting in some cases.
The question is, “Are data really gone once deleted?” The fact of automatic backups, services in the middle routinely copying data, and other ways of creating unobserved backups may mean that deleted data can come back to life.
Pragmatism and legal constraints as well as the “men in the middle” issue can create zombie data, which, unlike the fictional zombies, can bite.
Stephen E Arnold, November 17, 2015
November 11, 2015
I have bumped against digital initiatives in government and industry a number of times. The experience and understanding I gained were indispensable. Do you remember the “paperless office”? The person credited with coining this nifty bit of jargon was, if memory serves me, Harvey Poppel. I worked with the fellow. He also built a piano. He became an investment wizard.
Later I met a person deeply involved with reducing paperwork in the US government. The fellow, an authoritative individual, ran an advertising and marketing company in Manhattan. I recall that he was proud of his work on implementing strategies to reduce dead tree paper in the US government. I am not sure what happened to him or his initiative. I know that he went on to name a new basketball arena, selecting a word in use as the name of a popular vitamin pill.
Then a mutual acquaintance explained the efforts of an expert who wrote a book about Federal digitalization. I enjoyed his anecdotes. I was, at the time, working as an advisor to a government unit involved in digital activities, but the outfit ran on paper. Without paper, the Lotus Notes system could not be relied upon to make the emails and information about the project available. The fix? Print the stuff on paper. The idea was to go digital, but the information highway was built on laser printer paper.
I thought about these interactions when I read “A Decade into a Project to Digitize U.S. Immigration Forms, Just 1 is Online.” (If the link is dead, please, contact the dead tree publisher, not me.)
According to the article:
Heaving under mountains of paperwork, the government has spent more than $1 billion trying to replace its antiquated approach to managing immigration with a system of digitized records, online applications and a full suite of nearly 100 electronic forms. A decade in, all that officials have to show for the effort is a single form that’s now available for online applications and a single type of fee that immigrants pay electronically. The 94 other forms can be filed only with paper.
I am not surprised. The article uses the word “mismanaged” to describe the process upon which the development wheels would turn.
The write up included a quote to note:
“You’re going on 11 years into this project, they only have one form, and we’re still a paper-based agency,’’ said Kenneth Palinkas, former president of the union that represents employees at the immigration agency. “It’s a huge albatross around our necks.”
What’s interesting is that those involved seem to be trying very hard to implement a process which puts data in a database, displays information online, and reduces the need for paper, the stuff from dead trees.
The article suggests that one vendor (IBM) was involved in the process:
IBM had as many as 500 people at one time working on the project. But the company and agency clashed. Agency officials, for their part, held IBM responsible for much of the subsequent failure, documents show.
The company’s initial approach proved especially controversial. Known as “Waterfall,” this approach involved developing the system in relatively long, cascading phases, resulting in a years-long wait for a final product. Current and former federal officials acknowledged in interviews that this method of carrying out IT projects was considered outdated by 2008.
Several observations are warranted, but these are unlikely to be particularly life affirming:
- The management process is usually not focused on delivering a functioning system. The management process is designed to permit billing and cause meetings. The actual work appears to be cut off from these administrative targets of having something to do and sending invoices for services rendered.
- Like other interesting government projects such as the upgrading of the IRS or the air traffic control system, figuring out what to do and how to do it are sufficiently complex that everyone involved dives into details, political considerations, and briefings. Nothing much comes from these activities, but they constitute “work,” so, day to day, week to week, month to month, and year to year, the process becomes its own goal. The new system remains an abstraction.
- No one working on a government project, including government professionals and contractors, has responsibility to deliver a solution. Projects become a collection of fixes, which are often demonstrations of a small scale function. The idea that a comprehensive system will actually deliver a function results in software quite similar to the famous HealthCare.gov service.
I am tempted to mention other US government initiatives. I won’t. Shift to the United Kingdom. That country has been working on its National Health Service systems for many years. The initiatives to improve usability, functionality, and various reductions, ranging from cost reduction to waiting time reduction, have been similar. The project is not that different from US government efforts.
What’s the fix?
Let me point out that digitization, computerization, and other Latinate nominatives are fated to remain in a state of incompletion. How can one finish when the process, not the result, is the single most important objective?
I heard that some units of Angela Merkel’s government are now using traditional typewriters. Ah, progress.
Stephen E Arnold, November 11, 2015
October 28, 2015
I read “IBM Case Manager Provides Tailored Content Management.” For those of you not keeping track of IBM’s product line, may I share with you that IBM Case Manager is a wrapper perched on top of FileNet?
In 2006, yep, nine short years ago, IBM purchased FileNet for $1.6 billion. I literally stumbled upon FileNet when I was doing some work for one of the financial outfits which once found me mildly amusing. FileNet was founded in 1982. The company scanned checks and other documents, stored the images on optical discs, and made the contents searchable—sort of.
The hardware was pure 1982: Big machines, big scanners, and lots of humans doing tasks. Over time, FileNet updated its human dependent system to become more automated. FileNet was a proprietary set up and required lots of engineers, programmers, and specialists to set up the system and keep it humming along at 2 am when most back office operations were performed in the 1980s.
FileNet is still available. But IBM has created applications which are designed to make the system more saleable in today’s market. The IBM Case Manager includes FileNet and workflow, collaboration, and compliance tools. You can now run FileNet from a mobile device. When I first stubbed my toe on a giant scanning system, folks were using nifty green screens. Progress.
The 1980s are gone. IBM now delivers a case manager. Keep in mind, gentle reader, that case management is a solution keenly desired by many in law enforcement and certain intelligence disciplines. The US government continues to search for a case management system that meets its various units’ requirements. I would suggest that some of the products available as commercial off the shelf software do not do the job. But let’s focus on what the article reveals about IBM Case Manager.
The article points out that IBM Case Manager includes these components:
- A unified interface. Always good for a busy user.
- A data capture and parsing module.
- Information life cycle tools. The idea is that one can comply with Federal regulations and make darned sure information has a “kill on” date.
- A content manager which “provides features for capture, workflow, collaboration, and compliance on both mobile and desktop [devices].”
- SoftLayer which makes IBM Case Manager a cloud application. But licensees can install the system on premises or use a hybrid approach which can be exciting for engineers and investigators.
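The “kill on” date idea in the information life cycle bullet is simple to sketch. This is my illustration, not IBM’s API; the record types and retention periods below are hypothetical stand-ins for whatever Federal rules a licensee must follow:

```python
from datetime import date, timedelta

# Hypothetical retention rules, in days, by record type
RETENTION = {
    "traffic_log": 90,        # keep Web logs a few months
    "tax_record": 7 * 365,    # keep tax paperwork seven years
    "sec_filing": 6 * 365,    # keep filings six years
}

def kill_on(created: date, record_type: str) -> date:
    """Date after which a record may be purged under the rule set above."""
    return created + timedelta(days=RETENTION[record_type])

def may_purge(created: date, record_type: str, today: date) -> bool:
    """True when the record has outlived its retention window."""
    return today >= kill_on(created, record_type)

print(may_purge(date(2015, 1, 1), "traffic_log", date(2015, 10, 28)))  # True
print(may_purge(date(2015, 1, 1), "tax_record", date(2015, 10, 28)))   # False
```

The hard part in a real system is not the date arithmetic; it is agreeing on the rule table and making darned sure every copy of the record honors it.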
But the big news in the article is contained in this passage, which I circled in dollar bill green:
Analytics, which is a set of packages that includes IBM Watson, which can glean insight from business content, present that insight in the right context, and identify patterns and trends.
I did not know that IBM Case Management included Watson. My understanding was that Watson was the new IBM; therefore, Watson includes IBM Case Management.
Perhaps this is a minor point, but since we are dealing with technology from the 1980s, open source code, and wrappers which add a range of user experience features, I think getting the horse and cart lined up correctly can be helpful at times.
Another remarkable revelation in the article is that IBM Case Manager is for “enterprise of all sizes.” There you go. The local Harrod’s Creek, Kentucky, car wash and grocery can use IBM Case Management with Watson to help the proprietors deal with their information demands.
May I suggest that FileNet, regardless of its name, is appropriate for outfits like banks, hospitals, and meaty government agencies?
I also learned:
IBM Case Manager can be used to monitor social media sites to get a reading on public sentiment on a given subject or brand. Case Manager can also provide collaboration with social media platforms.
I have updated my Watson files and noted that IBM Case Manager includes Watson or is it the other way around?
Stephen E Arnold, October 28, 2015
October 15, 2015
I like the idea of blaming what some MBA whiz called exogenous events. The idea is that hapless yet otherwise capable senior managers are unable to deal with the ups and downs of running a business. In short, an exogenous event is a variation on “it’s not my fault,” “there’s little I can do,” and “let’s just muddle forward.” The problem is that hunting for scapegoats is not a way to generate revenue. Wait. One can raise subscription fees.
I read “Netflix Is Blaming Slow US Growth on the Switch to Chip-Based Credit Cards.” The write up references a letter, allegedly written by the Netflix top dog. I noted this passage:
In his letter to investors, Netflix CEO Reed Hastings partially blamed America’s recent switch to chip-enabled credit cards. As credit card companies send new cards to their customers, some have been issuing new numbers, as well. And if people forget to update their credit card number with Netflix, they can’t pay their bill and become what Hastings called “involuntary churn.”
I like that involuntary churn. I remember working on a project for a telecommunications company in which churn was a burr under the saddle of some executives. Those pesky customers. Darn it.
The write up ignores the responsibility of management to deal with exogenous events. When a search system fails, is it the responsibility of customers to fix the system? Nah, users just go to a service that seems to work.
I interpreted this alleged explanation as the article’s willingness to allow Netflix’s management to say, in effect, “Hey, this is not something I can do anything about.” If not the top dog, who takes responsibility? Perhaps the reason is not chip enabled credit cards? Perhaps users are sending Netflix a signal about sometimes unfindable content, clunky search, and a lack of features. Not everyone is a binge watcher. Some folks look for Jack Benny films or other entertainment delights. When these are not available, perhaps some look elsewhere. Seek and you shall find often delivers the goods.
Stephen E Arnold, October 15, 2015
October 8, 2015
I remember creating a document, copying the file to a floppy, and then walking up one flight of steps to give the floppy to my boss. He took the floppy, loaded it into his computer, and made changes. A short time later he would walk down one flight of steps, hand me the floppy with his file on it, and I would review the changes.
I thought this was the cat’s pajamas for two reasons:
- I did not have to use ledger paper, sharpen a pencil, and cramp my fingers
- Multiple copies existed so I no longer had to panic when I spilled my Fresca across my desk.
Based on the baloney I read every day about the super wonderful high speed, real time cloud technology, I was shocked when I read “Snowball’s Chance in Hell? Amazon Just Launched a Physical Data Transfer Service.” The news struck me as more important than the yap and yammer about Amazon disrupting cloud business and adding partners.
Here’s the main point I highlighted in pragmatic black:
A Snowball device is ordered through the AWS Management Console and is delivered to site within a few days; customers can order multiple devices and devices can be run in parallel. Described as coming in its “own shipping container” (it doesn’t require packing or unpacking) the Snowball is entirely self-contained, complete with 110 Volt power, a 10 GB network connection on the back and an E Ink display/control panel on the front. Once received it’s simply a matter of plugging the device in, connecting it to a network, configuring the IP address, and installing the AWS Snowball client; a job manifest and 25 character unlock code complete the task. When the transfer of data is complete the device is disconnected and a shipping label will automatically appear on the E Ink display; once shipped back to Amazon (currently only the Oregon data center is supporting the service, with others to follow) the data will be decrypted and copied to S3 bucket(s) as specified by the customer.
There you go. Sneaker net updated with FedEx, UPS, or another shipping service. Definitely better than carrying an appliance up and down stairs. I was hoping that individuals participating in the Mechanical Turk system would be available to pick up an appliance and deliver it to the Amazon customer and then return the gizmo to Amazon. If Amazon can do Etsy-type stuff, it can certainly do Uber-type functions, right?
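The arithmetic behind shipping disks instead of pushing bits is worth a quick sketch. The numbers below are my hypothetical example, not Amazon’s, but they show why a courier beats a typical office connection for bulk transfers:

```python
def transfer_days(terabytes: float, megabits_per_sec: float) -> float:
    """Days to move a data set over a network link, ignoring
    protocol overhead and interruptions (so a best case)."""
    bits = terabytes * 1e12 * 8              # decimal TB to bits
    seconds = bits / (megabits_per_sec * 1e6)
    return seconds / 86400                   # seconds per day

# Hypothetical job: 50 TB over a 100 Mbps office link
print(round(transfer_days(50, 100), 1))  # roughly 46 days; a courier wins easily
```

A week of FedEx round trip looks pretty good against a month and a half of saturating the office pipe. Sneaker net endures for a reason.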
When will the future arrive? No word on how the appliance will interact with Amazon’s outstanding search system. I wish I knew how to NOT out unpublished books or locate mysteries by Japanese authors available in English. Hey, there is a sneaker net. Focus on the important innovations.
Stephen E Arnold, October 8, 2015
September 21, 2015
I read “How Healthcare.gov Botched $600 Million worth of Contracts.” My initial reaction was that the $600 million figure understated the fully loaded costs of the Web site. I have zero evidence about my view that $600 million was the incorrect total. I do have a tiny bit of experience in US government project work, including assignments to look into accounting methods in procurements.
The write up explains that an audit by the Health and Human Services Office of Inspector General identified the root causes of the problems with the allegedly $600 million Healthcare.gov Web site. The source document was online when I checked on September 21, 2015, at this link. If you want this document, I suggest you download it. Some US government links become broken when maintenance, interns, new contractors, or site redesigns are implemented.
The news story, which is the hook for this blog post, does a good job of pulling out some of the data from the IG’s report; for example, a list of “big contractors behind Healthcare.gov.” The list contains few surprises. Many of the names of companies were familiar to me, including that of Booz, Allen, where I once labored on a range of projects. There are references to additional fees from scope changes. I am confident, gentle reader, that you are familiar with scope creep. The idea is that the client, in the case of Healthcare.gov, needed to modify the tasks in the statement of work which underpins the contracts issued to the firms which perform the work. The government method is to rely on contractors for heavy lifting. The government professionals handle oversight, make certain the acquisition guidelines are observed, and plug assorted types of data into various US government back office systems.
The news story repeated the conclusion of the IG’s report that better training was needed to make the Healthcare.gov type of project work better in the future.
My thoughts are that the news story ignored several important factors which in my experience provided the laboratory in which this online commerce experiment evolved.
First, the notion of a person in charge is not one that I encountered too often in my brushes with the US government. Many individuals change jobs, rotating from assignment to assignment, so newcomers are often involved after a train has left the station. In this type of staffing environment, the enthusiasm for digging deep and re-rigging the ship is modest or secondary to other tasks such as working on budgets for the next fiscal year, getting involved in new projects, or keeping up with the meetings which comprise the bulk of a professional’s work time. In short, decisions are not informed by a single individual with a desire to accept responsibility for a project. The ship sails on, moved by the winds of decisions by those with different views of the project. The direction emerges.
Second, the budget mechanisms are darned interesting. Money cannot be spent until the project is approved and the estimated funds are actually transferred to an account which can be used to pay a contractor. The process requires that individuals who may have never worked on a similar project create a team which involves various consultants, White House fellows, newly appointed administrators, procurement specialists with law degrees, or other professionals to figure out what is going to be done, how, what time will be allocated and converted to estimates of cost, and the other arcana of a statement of work. Firms make a living converting statements of work into proposals to do the actual work. At this point, the disconnect between the group which defined the SOW and the firms bidding on the work becomes the vendor selection process. I will not explore vendor selection, an interesting topic outside the scope of this blog post. Vendors are selected and contracts written. Remember that the estimates, the timelines, and the functionality now have to be converted into the Healthcare.gov site or the F-35 aircraft or some other deliverable. What happens if the SOW does not match reality? The answer is a non functioning version of Healthcare.gov. The cause, gentle reader, is not training.
Third, the vendors, bless their billable hearts, now have to take the contract which spells out exactly what the particular vendor is to do and then actually do it. What happens if the SOW gets the order of tasks wrong in terms of timing? The vendors do the best they can. Vendors document what they do, submit invoices, and attend meetings. When multiple vendors are involved, the meetings with oversight professionals are not the places to speak in plain English about the craziness of the requirements or the tasks specified in the contract. The vendors do their work to the best of their ability. When the time comes for different components to be hooked together, the parts usually require some tweaking. Think rework. Scope change required. When the go live date arrives, the vendors flip the switches for their part of the project and individuals try to use the system. When these systems do not work, the problem is a severe one. Once again: training is not the problem. The root cause is that the fundamental assumptions about a project were flawed from the get go.
Is there a fix? In the case of Healthcare.gov, there was. The problem was solved by creating the equivalent of a technical SWAT team, working in a very flexible manner with procurement requirements, and allocating money without the often uninformed assumptions baked into a routine procurement.
Did the fix cost money? Yes. Do I know how much? No. My hunch is that there is zero appetite in the US government, at a “real” news service, a watchdog entity, or an in house accountant to figure out the total spent for Healthcare.gov. Why do I know this? The accounting systems in use by most government entities are not designed to roll up direct and indirect costs with a mouse click. Costs are scattered and methods of payment pretty darned crazy.
Net net: Folks can train all day long. If that training focuses on systems and methods which are disconnected from the deliverable, the result is inefficiency, a lack of accountability, and misdirection from the root cause of a problem.
I have been involved in various ways with government work in the US since the early 1970s. One thing remains consistent: The foundational activities are uneven. Will the procurement process change? Forty years ago I used to think that the system would evolve. I was wrong.
Stephen E Arnold, September 21, 2015
September 11, 2015
I know what a printer is. The machine accepts instructions and, if the paper does not jam, outputs something I can read. Magic.
I find it interesting to contemplate my printers and visualize them as an enterprise content management system. In the late 1990s, my team and I worked on a project involving a Xerox DocuTech scanner and printer. The idea was that the scanner would convert a paper document to an image with many digital features. Great idea, but the scanner gizmo was not talking to the printer thing. We got them working and shipped the software, the machines, and an invoice to the client. Happy day. We were paid.
The gap between that vision from a Xerox unit and the reality of the hardware was significant. But many companies have stepped forward to convert knowledge resident operations, which rely on experienced middle managers, into hollowed out outfits trying to rely on software. My recollection is that Fulcrum Technologies nosed into this thorn bush with DOCSFulcrum a decade before the DocuTech was delivered by a big truck to my office. And, not to forget our friends to the East, the French have had a commitment to this approach to information access. Today, one can tap Polyspot or Sinequa for business process centric methods.
The question is, “Which of these outfits is making enough money to beat the dozens of outfits running with the other bulls in digital content processing land?” (My bet is on the completely different animals described in my new study CyberOSINT: Next Generation Information Access.)
Years later I spoke with an outfit called Brainware. The company was a reinvention of an earlier firm, which I think was called SER or something like that. Brainware’s idea was that its system could process text which could be scanned or in a common file format. The index allowed a user to locate text matching a query. Instead of looking for words, the Brainware system used trigrams (sequences of three letters) to locate similar content.
Similar to the Xerox idea. The idea is not a new one.
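The trigram trick can be sketched in a few lines. This is my simplified illustration of the general technique, not Brainware’s actual implementation: break strings into overlapping three-letter chunks and compare the overlap, which makes matching forgiving of OCR errors and misspellings.

```python
def trigrams(text: str) -> set[str]:
    """Letter trigrams of a normalized (lowercased, alphanumeric) string."""
    s = "".join(ch for ch in text.lower() if ch.isalnum())
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of trigram sets: 1.0 identical, 0.0 no shared trigrams."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

# A scanning error ("Brainwore") still scores well against the intended term,
# while an unrelated word scores zero
print(similarity("Brainware", "Brainwore") > similarity("Brainware", "Fulcrum"))
```

Because the chunks are letters rather than whole words, a query survives the kind of noise that scanned checks and faxes produce, which is presumably why a document imaging outfit went this route.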
I read two write ups about Lexmark, which used to be part of IBM. Lexmark is just down the dirt road from me in Lexington, Kentucky. Its financial health is a matter of interest for some folks in these here parts.
The first write up was “How Lexmark Evolved into an Enterprise Content Management Contender.” The main idea pivots on my knowing what content management is. I am not sure what this buzzword embraces. I do know that organizations have minimal ability to manage the digital information produced by employees and contractors. I also know that most organizations struggle with what their employees do with social media. Toss in the penchant units of a company have for creating information silos, and most companies look for silver bullets which may solve a specific problem in the firm’s legal department but leave many other content issues flapping in the wind.
According to the write up:
Lexmark is "moving from being a hardware provider to a broader provider of higher-value solutions, which are hardware, software and services," Rooke [a Lexmark senior manager] said.
Easy to say. The firm’s financial reports suggest that Lexmark faces some challenges. Google’s financial chart for the outfit displays declining revenues and profits.
The Brainware, ISYS Search Software, and Kofax units have not been able to provide the revenue boost I expected Lexmark to report. HP and IBM, which have somewhat similar strategies for their content processing units, have also struggled. My thought is that it may be more difficult for companies which once were good at manufacturing fungible devices to generate massive streams of new revenue from fuzzy stuff like software.
The write up does not have a hint of the urgency and difficulty of the Lexmark task. I learned from the article:
Lexmark is its own "first customer" to ensure that its technologies actually deliver on the capabilities and efficiency gains promoted by the company, Moody [Lexmark senior manager] said. To date, the company has been able to digitize and automate incoming data by at least 90 percent, contributing to cost reductions of 25 percent and a savings of $100 million, he reported. Cost savings aside, Lexmark wants to help CIOs better and more efficiently incorporate unstructured data from emails, scanned documents and a variety of other sources into their business processes.
The sentiment is one I encountered years ago. My recollection is that the precursor of Convera explained this approach to me in the 1980s when the angle was presented as Excalibur Technologies.
The words today are as fresh as they were decades ago. The challenge, in my opinion, remains.
I also read “How to Build an Effective Digital Transaction Management Platform.” This article is also from eWeek, the outfit which published the “How Lexmark Evolved” piece.
What does this listicle state about Lexmark?
I learned that I need a digital transaction management system. A what? A DTM looks like workflow and information processing. I get it. Digital printing. Instead of paper, a DTM allows a worker to create a Word file or an email. Ah, revolutionary. Then a DTM automates the workflow. I think this is a great idea, but I seem to recall that many companies offer these services. Then I need to integrate my information. There goes the silo, even if regulatory or contractual requirements suggest otherwise. Then I can slice and dice documents. My recollection is that firms have been automating document production for a while. Then I can use esignatures which are trustworthy. Okay. Trustworthy. Then I can do customer interaction “anytime, anywhere.” I suppose this is good when one relies on innovative ways to deal with customer questions about printer drivers. And I can integrate with “enterprise content management.” Oh, oh. I thought enterprise content management was sort of a persistent, intractable problem. Well, not if I include “process intelligence and visibility.” Er, what about those confidential documents relative to a legal dispute?
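For readers wondering what is actually under the DTM buzzword, the capabilities the listicle enumerates (create a document, route it through a workflow, collect e-signatures, archive it) reduce to a fairly ordinary pipeline. Here is a toy sketch, assuming nothing about any vendor’s product; every class and function name below is a hypothetical illustration, not a real API:

```python
# Toy sketch of a "digital transaction management" workflow:
# create a document, route it to signers, record an e-signature
# for each, then archive. The sha256 digest stands in for a real
# trustworthy e-signature scheme; it is purely illustrative.
import hashlib
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    body: str
    signatures: list = field(default_factory=list)
    archived: bool = False

def sign(doc: Document, signer: str) -> None:
    # Record the signer and a hash of (signer + body) as the "signature".
    digest = hashlib.sha256((signer + doc.body).encode()).hexdigest()
    doc.signatures.append((signer, digest))

def run_workflow(doc: Document, signers: list) -> Document:
    # Route the document to each signer in turn, then archive it.
    for s in signers:
        sign(doc, s)
    doc.archived = True
    return doc

contract = run_workflow(Document("nda.docx", "terms..."), ["alice", "bob"])
print(len(contract.signatures), contract.archived)  # 2 True
```

Strip away the marketing gloss and a DTM is roughly this: a document object, a routing loop, and an audit trail, which is why the write up’s claims sound so familiar.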
The temporal coincidence of a fluffy Lexmark write up and the listicle suggests several things to me:
- Lexmark is doing the content marketing that public relations and advertising professionals enjoy selling. I assume that my write up, which you are reading, will be an indication of the effectiveness of this one-two punch.
- The financial reports warrant some positive action. I think that closing significant deals and differentiating the Lexmark services from those of OpenText and dozens of other firms would have been higher on the priority list.
- Lexmark has made a strategic decision to use the rocket fuel of two aging Atlas systems (Brainware and ISYS) and one Saturn system (Kofax’s Kapow) to generate billions in new revenue. I am not confident that these systems can get the payload into orbit.
Net net: Lexmark is following a logic path already stomped on by Hewlett Packard and IBM, among others. In today’s economic environment, how many federating, digital business process, content management systems can thrive?
My hunch is that the Lexmark approach may generate revenue. Will that revenue be sufficient to compensate for the decline in printer and ink revenues?
What are Lexmark’s options? Based on these two eWeek write ups, it seems as if marketing is the short term best bet. I am not sure I need another buzzword for well-worn concepts. But, hey, I live in rural Kentucky and know zero about the big city views crafted down the road in Lexington, Kentucky.
Stephen E Arnold, September 11, 2015
August 11, 2015
We’ve come across a well-penned article about the intersection of language and search engine optimization by The SEO Guy. Self-proclaimed word-aficionado Ben Kemp helps website writers use their words wisely in, “Language, Linguistics, Semantics, & Search.” He begins by discrediting the practice of keyword stuffing, noting that search-ranking algorithms are more sophisticated than some give them credit for. He writes:
“Search engine algorithms assess all the words within the site. These algorithms may be bereft of direct human interpretation but are based on mathematics, knowledge, experience and intelligence. They deliver very accurate relevance analysis. In the context of using related words or variations within your website, it is one good way of reinforcing the primary keyword phrase you wish to rank for, without over-use of exact-match keywords and phrases. By using synonyms, and a range of relevant nouns, verbs and adjectives, you may eliminate excessive repetition and more accurately describe your topic or theme and at the same time, increase the range of word associations your website will rank for.”
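The synonym-variation technique Kemp describes can be sketched as a simple keyword-variant expander. In the sketch below, the hardcoded `THESAURUS` dictionary is a stand-in for a real lexical resource such as WordNet; the function name and the example phrase are my own illustrations, not anything from Kemp’s article:

```python
# Minimal sketch of synonym variation for SEO copy: expand a target
# keyword phrase by swapping each word for related words, so the copy
# can repeat the concept without repeating the exact-match phrase.
# The tiny thesaurus is a hardcoded illustration, not a real resource.
THESAURUS = {
    "search": ["retrieval", "lookup", "query"],
    "engine": ["system", "platform"],
}

def variants(phrase: str) -> set:
    # Build alternative phrasings by substituting one synonym at a time.
    words = phrase.split()
    out = {phrase}
    for i, w in enumerate(words):
        for syn in THESAURUS.get(w, []):
            out.add(" ".join(words[:i] + [syn] + words[i + 1:]))
    return out

print(sorted(variants("search engine")))
```

A writer (or a tool) working from such a variant set can, as the quoted passage puts it, reinforce the primary keyword phrase while widening the range of word associations the page ranks for.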
Kemp goes on to lament the dumbing down of English-language education around the world, blaming the trend for a dearth of deft wordsmiths online. Besides recommending that his readers open a thesaurus now and then, he also advises them to make sure they spell words correctly, not because algorithms can’t figure out what they meant to say (they can), but because misspelled words look unprofessional. He even supplies a handy list of the most often misspelled words.
The development of ever more refined search algorithms, it seems, presents websites with the opportunity to craft better copy. See the article for more of Kemp’s language and SEO guidance.
Cynthia Murrell, August 11, 2015