I was Tim’s editor at Apress when he wrote this piece for the book, Oracle Insights: Tales of the Oak Table. I’d like to thank Gary Cornell and Jonathan Gennick at Apress, as well as Tim himself, for allowing me to reproduce it (in slightly edited down form) on Simple-Talk.
The book was meant to be something “completely different”. Rather than the usual, “here is feature x, here is an example of how to use it“, it was an attempt to teach people some truths about database development through the real experiences and stories of a bunch of guys (“the OakTable network”), who had spent much of their life doing this stuff.
I think it was largely successful, and in terms of the sort of thing we were gunning for, Tim’s contribution hit the nail pretty squarely on the head. It may seem odd to be presenting an “Oracle” article on a “SQL Server” site, but you don’t need to know Oracle to get something out of the book or Tim’s piece, and the lessons it teaches are applicable to anyone who develops database-driven software. Like all good stories it has an intriguing opening, a strong narrative and a killer ending. It remains not only my favorite chapter in the book, but one of my favorite chapters of all the books I edited.
I hope you enjoy it as much as I did.
----- End Editor's Note -----
In my undergraduate career, I took a single course in philosophy, which was a requirement for my liberal arts degree in economics. Up to that point, my only exposure to philosophy was Monty Python, and the lyrics of overwrought 1960s folk songs and overblown 1970s rock songs. The course was entitled “Introduction to Ethics” and in the first session, the professor asked the class, “How many people are here just to fulfill a prerequisite?” It was apparent that the only hands that weren’t raised belonged to those who didn’t know the meaning of the word “prerequisite.” The classroom was not quite a football stadium, but it probably should have been, as there were hundreds of people, and the majority of them appeared to be on athletic scholarships. Resignedly, the professor nodded and continued.
I immediately fell asleep but was soon roused when the professor asked, “What is good? In other words, what constitutes what is good and right and decent? How is good defined?” Despite a relatively spirited discussion, it became clear to all that nobody could really answer the question adequately. Finally, the professor, no doubt through long experience in teaching “Introduction to Ethics,” posed the best answer of all the bad answers. He called it the “Schlitz Ethic.” This was named after a well-known brand of fantastically bad American beer, whose advertising at the time always concluded with the jingle, “When it’s right, you know it!”
When it’s right, you know it. Implying that, although it is difficult to describe the meaning of good, you know good when you see it.
Warily, the entire class of 250 or so agreed that this worked as well as any of the other answers. The professor grinned sadly, correctly assuming that few of us appreciated the irony of beer advertising in relation to a discussion of ethics, and assigned Plato for homework.
When it’s right, you know it. And stating the converse, when it’s not right, you know that too …
I will fictionalize names to protect the identities of those involved, because almost all of the people who were involved are still active in the IT world, although the corporations involved have long ago bitten the dust, gone to meet their makers, joined the choir invisible, and are now pushing up daisies.
If you think you recognize any person or corporation in this account, then you are most certainly mistaken. Just have two stiff whisky drinks and that feeling of recognition should pass …
Those of us who work in Information Technology (IT) have all been on a project where something important is just not right. We know it, most everyone knows it, but nobody is quite able to put his or her finger on the problem in a convincing way. Why I was not able to muster a convincing argument at the time I can only attribute to lack of experience. As an old adage states: experience is what you get immediately after you need it.
This story is about such an IT project, the most spectacular failure I have ever experienced. It resulted in the complete dismissal of a medium-sized IT department, and eventually led to the destruction of a growing company in a growing industry. The company, which we’ll call “Upstart,” was a successful and profitable subscription television business. Upstart at this time was a wholly owned subsidiary of a giant media conglomerate, which we’ll call “BigMedia, Inc.”
The project occurred in the early 1990s, and it was a custom-built order-entry and customer-service application, closely resembling what is now referred to as Customer-Relationship Management or CRM. The core functionality of the system included:
- Order entry and inventory
- Customer service, help desk
- General ledger, accounts receivable, billing, and accounts payable
The application was called “Vision” and the name was both its officially stated promise for Upstart as well as a self-aggrandizing nod to its architect. The application was innovative, in that it was built to be flexible enough to accommodate any future changes to the business. Not just any foreseeable future changes to the business, but absolutely any changes to the business, in any form. It was quite a remarkable claim, but Vision was intended to be the last application ever built. It achieved this utter flexibility by being completely data-driven, providing limitless abstraction, and using object-oriented programming techniques that were cutting-edge at the time.
Like many such projects that set out to create a mission-critical application, the development effort spanned two years, about a year longer than originally projected. But that was acceptable, because this was the application that would last forever, adapting to any future requirements, providing unlimited Return On Investment (ROI). When the application finally went “live,” almost everybody in the company had invested so much in it that literally the fate of the company hinged on its success.
However, in the event of total project malfunction, mission-critical applications running the core business of multinational corporations are not permitted the luxury of the type of fast flameout demonstrated by thousands of “dot-com” companies in the era of the Internet bubble a few years later. Within a month of Vision going “live,” it was apparent to all but those most heavily vested in its construction that it was a failure. The timeline went as follows:
- Autumn 1991: Development begins; the “build versus buy” decision is settled in favor of “build” as Upstart decides to build a custom CRM application.
- September 1993: “Go Live” launch; hope and euphoria.
- October 1993: Frustration at poor performance and inability to produce reports.
- November 1993: Desperation.
- December 1993: Despair, IT director resigns.
- January 1994: Firings, IT department decimated.
- February 1994: New faces.
- March 1994: New replacement system chosen, the “build versus buy” decision is settled in favor of “buy (then customize).”
- November 1994: Replacement system launched, “Vision” retired.
- March 1995: BigMedia, Inc. sells Upstart to another company.
So, the entire debacle lasted three years, from inception to grave, generating an estimated ROI of minus $10 million. Such implosions remained rare until the spectacular supernovas when the Internet “bubble” burst in 2001.
Seeing and Vision
Vision was designed and implemented by a person called Randy. Randy was a vigorous proponent of object-oriented programming and modular programming languages such as Ada. From product development, he had moved into Oracle Consulting where he was regarded as brilliant but “controlling, sarcastic, and impatient”, according to those who worked with him. Apparently, there were no openings in marketing at the time.
Oracle placed Randy on an engagement at Upstart in 1991 during the initial strategy phase of the Vision project. It wasn’t long before Upstart realized that they had a systems architect with all of the answers. For each of their requirements, Randy had a solution:
Build an order-entry and customer-service application that integrates all of the elements of customer interaction
Upstart wanted a better end-user interface for their reservation agents. They also wanted that system to be capable of retrieving every bit of information about a customer while the reservation agent was interacting with the customer. Such systems now come shrink-wrapped, but in the early 1990s the “build-versus-buy” decision still came down firmly on the “build” side for this kind of functionality. Randy promised to build an application that could do absolutely anything.
Make it fast and reliable
Upstart spent a lot of money on its IT infrastructure. They bought the largest, most reliable servers available and the best database technology of the time (Oracle7). As far as Upstart was concerned, Vision would be fast, ipso facto, because it had the best server hardware and the best database software.
Build an application that is flexible enough to be adapted to meet any new requirements
This was the toughest requirement, and this was where Randy really impressed everyone. In computing, there is often a three-way trade-off between speed, reliability, and flexibility. This is truly the devil’s triangle of IT, where the conventional wisdom is that you can pick any two out of the three.
Vendor after vendor promised Upstart that their product could meet all of their present requirements, speedily and reliably. But changes? Future requirements? This is where the hemming and hawing started. The vendors would admit, with varying degrees of enthusiasm, to the presence of an application programming interface or API, allowing modifications and customizations to the application. How easy is it to change? Well, that’s what our professional services organizations are for!
It was not until Upstart talked to Randy that they heard a story that they liked, which was that all those existing products are built on the wrong technology.
Randy proposed building, from scratch, a completely data-driven application. Most conventional database applications are written in code using programming languages that access and manipulate data. Code is very difficult to change once it is written, but code controls data.
Randy’s idea was that all of the forms, all of the reports, all of the functionality of the system would be stored as metadata, or data about data, and the only thing coded in a programming language would be an engine to process both metadata and data. For example, instead of defining a table just to store order-line information, why not also store the basic business logic about inserting, updating, and deleting order-lines in the table, as well as details about the structure of an order-line. Once an engine was built, then that would be the end of all coding, forever and ever, amen. After that, the engine would process data about data then the data itself, and in this fashion any form, report, or program could be built. When new functionality or modifications were needed to the system, you only had to change the metadata, not the code.
Theoretically, this allowed rapid implementation of changes and new applications, because while changes to programming code required recompilation and relinking, changes to metadata had no such requirements.
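The idea is easier to see in a small sketch. The following hypothetical example (written in modern Python for brevity, nothing like the era's actual tooling; all entity and field names are invented) shows what "metadata instead of code" means: the rules live in rows of data, and one generic engine interprets them.

```python
# A minimal, hypothetical sketch of a metadata-driven "engine": business
# rules are stored as data rows, and one generic routine interprets them.
field_metadata = [
    # (entity, field, datatype, required)
    ("ORDER_LINE", "QUANTITY", "number", True),
    ("ORDER_LINE", "NOTES",    "text",   False),
]

def validate(entity, record):
    """Generic engine: check any record against whatever the metadata says."""
    errors = []
    for ent, field, dtype, required in field_metadata:
        if ent != entity:
            continue
        value = record.get(field)
        if value is None:
            if required:
                errors.append(f"{field} is required")
        elif dtype == "number" and not isinstance(value, (int, float)):
            errors.append(f"{field} must be a number")
    return errors

# Changing the business rules means changing rows, not recompiling code:
print(validate("ORDER_LINE", {"QUANTITY": 5}))          # -> []
print(validate("ORDER_LINE", {"NOTES": "rush order"}))  # -> ['QUANTITY is required']
```

In Vision's case, of course, the metadata was not a separate structure at all: it was stored in the very same database table as the data it governed.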
And Randy delivered on these promises. Well, most of them, anyway …
The data-driven application
One of Randy’s inspirations for this design was early Computer-Aided Software Engineering (CASE), in particular Oracle’s own CASE*Tools product. In its early incarnations, this product included a table named SDD_ELEMENTS that contained both data and metadata indistinguishably-it was all just grist for the processing logic within the CASE product. The metadata was initialized as “seed data” by the product installation software, and then the “engine” used this as the basis for recursive queries back into itself when new data or metadata was being added. Randy commented that this recursive design was a “eureka” moment, an epiphany, for him and it came to fruition in the Vision system.
The idea was that nothing about the application logic was declared or coded. Not the business logic, and not the data structures for storage.
The pity is that Randy did not temper that epiphany with some real-world practicality. For example, what works very well for a CASE application supporting 1-5 developers who think a lot and type only a little may not work well at all for a mission-critical order-entry application supporting 300-400 reservation agents who think very little and type very rapidly. The database workload generated by a small number of users generating transactions intermittently has little in common with the database workload generated by an order-entry application. Just considering the growth in the volume of data between the two situations reveals striking differences.
Recalling the three-way design trade-off between speed, reliability, and flexibility, a CASE application needs flexibility the most in order to capture the most complex ideas and designs. It needs reliability and robustness as well in order for the application developer to trust it enough to use it. It does need to perform well, but with such a small user community, speed is probably the least important side of the triangle.
For an order-entry, inventory, customer-service, and financial application, reliability and high availability might be the most important characteristic of all, as the mission-critical application that is not available or which loses data is worth nothing. Next, such a system has to perform well, as the lack of speed for the dozens or hundreds of concurrent users can also render the application unfit to fulfill business requirements. Of the three qualities, flexibility may well be the least important attribute as far as business users are concerned.
The Vision system’s best quality, flexibility, was the least important to Upstart as a company, despite its attractiveness to Upstart’s IT division. The system was reliable enough, but it did not have the speed to fulfill its business requirements.
This is a classic example of what can happen when the priorities of the IT “department” do not match, and are not appropriately aligned with, the priorities of the “business.” Often, what appears desirable and high-priority to the propeller-heads (e.g., cutting-edge technology, flexibility, database independence) implicitly conflicts with the priorities and business requirements of the company. When IT decisions are made in a vacuum, rather than with the wishes of the business foremost, these sad cases become an unfortunate reality.
When it’s not right, you know it . . . or not?
In addition to mixing metadata with data, the Vision system had another peculiarity that made it truly memorable, at least to those with any experience building databases.
The Vision system comprised a single table, named DATA, appropriately enough. When you consider the overriding goal of complete flexibility, where all rules are interpreted at run time, it seems inevitable that everything involving “structure” would also be made as generic and undistinguished as possible. Why have multiple tables when there is no differentiation of data structure? Instead, just pile all of the 150 or so different logical entities into the same table, plunking the various items of data into generically defined columns. The application data itself contains the metadata identifying each item of data and how it should be stored and displayed.
In essence, just about all of the features of the relational database were eschewed; instead, the database was used like a filing system for great big plastic bags of data. Why bother with other containers for the data? Just jam it into a generic black plastic garbage bag. If all of those bags full of different types of data look the same and are heaped into the same pile, don’t worry! We’ll be able to differentiate the data after we pull it off the big pile and look inside.
Amazingly, Randy and his crew thought this was incredibly clever. Database engineer after database engineer was struck dumb by the realization of what Vision was doing, but the builders of the one-table database remained blissfully convinced that they were ushering in a new dawn in database design.
Here are some facts about the Vision system:
- The data model comprised a single table named DATA.
- The DATA table had 240+ columns.
- The primary key column was a numeric named SYSNO.
- Columns existed for attributes such as TYPE, SUBTYPE, CATEGORY, SUBCATEGORY, STATUS, SUBSTATUS, GROUP, and OWNER, which were intended to fully describe the type, category, or grouping to which the row belonged. Each of these columns was itself a SYSNO, joining back to other rows in DATA for more detail.
- The majority of columns in DATA provided sequencing, descriptive text, names, values, amounts, dates entered and modified, and so on. Some of these columns would be named and data-typed appropriately, while others were “generic spare” columns, several for each datatype.
- When the Vision system was finally decommissioned, a year after it went into production, the DATA table consumed 50GB of space.
- 40+ associated indexes consumed another 250GB of space.
Suppose you wanted to find a Customer record. To find that information, you first needed to retrieve the metadata describing the structure of a customer entity from the DATA table. This might involve a query on DATA to retrieve the row describing the Customer “entity” (one of about 150 different logical entities), followed by further queries on DATA for the rows, related to that entity, that described its “columns,” so that you knew in which generic columns of the DATA table the COMPANY_NAME and address information were actually stored.

Only once all of this metadata had been retrieved were you ready to start querying the DATA table for the row for a specific customer. Using all of the metadata previously retrieved, you knew how many of the 240 columns in DATA were populated with customer data, what type of data they should contain, what the acceptable ranges of values were, how the numeric or date data should be displayed, and so forth.
A SQL query for a customer record in a conventional database application might look like this:
select company_name, address
from customer
where company_id = <company-ID>
In the Vision system, a similar SQL query might look like this:
select ma.value
from data c,
     data cc,
     data ma
where c.description = 'CUSTOMER'
and cc.entity_id = c.sysno
and cc.description = 'COLUMN'
and ma.column_id = cc.sysno
It was not unusual for such a simple query, intended only to retrieve information for one customer, to perform 6-10 recursive “self-joins” back to the DATA table, over and over again. More demanding queries, for instance those retrieving several customer records, or order details, required at least 12-15 recursive “self-joins.” Every operation had first to retrieve the metadata about the structure of the data before retrieving or modifying the data itself.
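The pattern is easy to reproduce in miniature. The following toy sketch, using Python and SQLite with an assumed and greatly simplified column layout (not Vision's real schema), shows how even a one-row lookup must first join DATA back to itself to find the entity and column metadata:

```python
import sqlite3

# A toy reconstruction of the one-table pattern: metadata rows and data
# rows share the same table, distinguishable only by which columns are used.
db = sqlite3.connect(":memory:")
db.execute("""create table data (
    sysno integer primary key,
    description text,
    entity_id integer,
    column_id integer,
    value text)""")
db.executemany("insert into data values (?, ?, ?, ?, ?)", [
    (1, "CUSTOMER", None, None, None),           # metadata: the entity itself
    (2, "COLUMN",   1,    None, "COMPANY_NAME"), # metadata: one of its columns
    (3, None,       None, 2,    "Upstart TV"),   # at last, actual customer data
])

# Even a single-customer lookup joins DATA back to itself twice:
row = db.execute("""
    select ma.value
      from data c, data cc, data ma
     where c.description  = 'CUSTOMER'
       and cc.entity_id   = c.sysno
       and cc.description = 'COLUMN'
       and ma.column_id   = cc.sysno
""").fetchone()
print(row[0])  # -> Upstart TV
```

With 150-plus entities and 240-plus columns in play, each additional attribute retrieved meant yet another self-join back into the same ever-growing table.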
In real life, when a fire department receives an alarm about a fire in progress, the nearest fire station is notified and a crew is dispatched to handle the situation. The advantage of this situation is that both the dispatcher and crew handle the situation automatically and quickly. The disadvantage of this situation is that if there is a change in procedure, it may take some time to retrain the personnel to make them accustomed to the new procedure.
If this were instead handled the way the Vision system operated, each alarm would cause the fire dispatcher to check his or her training manuals on what to do when an alarm is received. Then, upon receipt of a dispatch, each fire crew member would have to check his or her training manuals to determine what to do when handling a fire call. Then, the crew would go handle the situation, carrying training manuals with them for any eventuality. The advantage of this approach is that each alarm would be handled the way the training manuals dictate, so that changes in procedures can be instantly implemented. The disadvantage, of course, is that you have dispatchers reading manuals when they should be dispatching, and fire crew members reading manuals when they should be fighting fires. Chaos in the streets, as trucks careen wildly while their drivers consult manuals instead of keeping their eyes on the road. Ah, I feel a Hollywood screenplay coming on …
As an illustration of its remarkable flexibility, the entire Vision application consisted of a single form. This single form utilized code extensions called “user exits,” written in C, which called the Vision “engine.” This single form was therefore able to display and accept user input for any of the 150+ data entities that comprised all of the modules of the Vision system.
So, the Vision engine was able to generate these SQL statements very easily, because it was coded to piece together metadata in order to retrieve the data. However, each invocation of the form required that all of this form-building logic be executed afresh. If 300 data-entry operators invoked a form, the logic to build the form was executed 300 times. If operators hopped back and forth between screens, then all of this form-generation logic ran again and again and again. Please note that changes to order-entry forms tend to occur rather infrequently, so you have to wonder whether this incredible flexibility was a solution in search of a problem.
For human beings, the situation was dire. What the Vision application’s object-oriented “engine” did very rapidly and easily did not come quickly to the carbon-based meat-processors in the skulls of developers. For example, developers would be asked to compose a report about customer orders or summarize monthly sales. It sounds straightforward enough, but before retrieving the data, they had to retrieve the instructions on how to retrieve the data. Once they had retrieved that logic, they then had to retrieve the structure in which the data was stored, for each individual item of data required. Using simple ad-hoc query or reporting tools (such as Oracle’s SQL*ReportWriter or Crystal Reports) became unbelievable drudgery, bordering on the impossible. Reporting tools that generated their own SQL, based upon expectations of a standard normalized data model, were completely out of the question. In order to shorten the number of levels of recursive logic, developers ended up hard-coding as much of the metadata as they could, in an effort to query only the data. This expediency drove Randy out of his mind, as he rightly perceived that all of Vision’s flexibility was slowly and surely leaking out in each instance of hard-coded metadata.
The DATA table eventually grew to 50GB in size. Oracle7 did not have any partitioning, so as the table grew larger, every access became correspondingly slower. Using the 2,048-byte database blocks that were common at the time, most of the indexes on DATA grew to 5-6 branch levels, which made navigating the branch levels expensive, even for UNIQUE scans into the index.

Moreover, the clustering of data values in the table, with respect to their order in the 40+ indexes, was almost uniformly poor. The order in which rows were inserted into the table matched only the ordering of the data values in the SYSNO column, so that was the only index with a good clustering factor of index values compared to table rows, and thus the only index well suited to RANGE scans across its leaf blocks. Unfortunately, since SYSNO was the primary key of the DATA table, a UNIQUE probe for individual rows from that index was the most common operation. So, the only index suited for RANGE scans was also the index that was hardly ever used for RANGE scans. All of the other columns against which RANGE scans might be performed had huge indexes, with many branch levels from the “root” node to the “leaf” nodes, and they were poorly suited to scanning for multiple rows.
Moreover, because the DATA table contained over 150 logical data entities, and some of those entities were frequently updated and deleted from, all of the 40+ indexes had an enormous number of deleted entries. These “holes” in the index entries are due to the fact that Oracle does not remove an index entry, but rather enables its possible future reuse by simply nullifying the pointer to the table row and keeping the data value portion in its slot. Thus, as rows were deleted and updated in the DATA table, most of the indexes were becoming more and more sparsely populated over time.
Because the entire application was encapsulated inside that table, and because much of the data-driven logic contained within DATA was recursive, we noted that the growth rate of DATA was exponential. The phenomenon seemed to resemble the growth of the human population of the earth: as more people procreate, more people are added more rapidly, which causes more people to procreate, which causes more people to be added even more rapidly, which causes even more people to procreate, (and so on) …
So, DATA itself was growing exponentially. Its indexes were mirroring that growth, but growing even faster as deleted entries compounded the problem. The Sequent server on which the database resided was among the largest in the United States at the time, but it was quickly being gobbled by this fast-expanding database.
It was at this point that I was called in.
My first recommendation, upon being consulted by Upstart to help improve performance, was to break all 150+ logical entities into separate tables, similar to any other conventional database application built on hierarchical or relational databases. This consultation occurred about a month prior to Vision going live, and the Upstart project managers smiled indulgently and commented, “Every database consultant who comes in here says that.” The implication was, “Tell us something new, because that’s not going to happen.” They went on to explain that this application was something new, and that the way it stored data and metadata in the database would revolutionize computing. I listened carefully, because the customer is always right, but in the end, I had nothing else to offer, and the interview was ended. Randy was not present at that initial meeting, as his faithful were well able to deal with the infidel.
Four months later, after Vision had been in production for three months, I was called back in to help improve performance, which was crippling all areas of business within Upstart. Again, I looked at what the application was doing and how the DATA table was growing, and repeated the same conclusion: this application needs a more conventional data model, with each logical entity stored in its own relational table.
This time, the reception was a little different. Randy was present at this meeting and repeated what had been said by his minions on my previous visit: it was impossible to break up DATA. Vision’s design was revolutionary, so be clever and think of something else. Change some database initialization parameters, or something. After all, the poor performance was Oracle’s problem, as the Vision application “engine” was operating flawlessly, so the problem had to be Oracle.
I noticed that the IT director who had been present at the last meeting four months earlier was missing, and found that he had resigned only two weeks prior. The new IT director was present, and she was obviously not one of Randy’s faithful. In fact, I couldn’t help noticing that even the faithful were not presenting a solid front anymore, and that Randy seemed isolated. So, I pressed on with my litany of the problems with Vision.
Too many indexes
The DATA table had over 40 indexes, so each INSERT statement had to insert over 40 index entries for each table row inserted and each DELETE statement had to delete over 40 index entries for each table row deleted.
When modifications were made to existing data using Oracle’s SQL*Forms v3.0, the program had a nasty habit of including all of the columns in the table in the resulting UPDATE statement. So, even when just one column was modified in the form, the resulting update would still attempt to change all of the columns maintained by the form. As a result, each UPDATE statement performed modifications on all 40 related index entries for each table row updated.
Each UPDATE and DELETE made each of the 40 indexes more sparsely populated, making them less efficient for just about every type of index access method.
Too many branch levels
With the 2,048 byte database block size that was the norm at the time, each of these 40 indexes had four, five, and sometimes six levels of B*Tree branches. Thus, the simplest use of any of the indexes was several times more “expensive” than they needed to be, as all branch levels had to be traversed from the index’s “root” node to one of the “leaf” nodes on each access.
Breaking the DATA table up into 150 smaller tables would allow the indexes on the smaller tables to have fewer branch levels, making each access more efficient.
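The arithmetic behind the branch-level claim is easy to sketch. Assuming roughly 20 bytes per index entry (an invented figure for illustration), a 2,048-byte block holds on the order of 100 entries, and the height of a B*Tree is roughly the logarithm of the entry count in that base:

```python
import math

# Back-of-envelope only: assume ~20 bytes per index entry (invented figure),
# so a 2,048-byte block holds on the order of 100 entries.
def btree_height(num_entries, block_size=2048, entry_bytes=20):
    fanout = block_size // entry_bytes          # entries per block, ~100
    return max(1, math.ceil(math.log(num_entries, fanout)))

# One giant index over every row in a monolithic DATA table:
print(btree_height(500_000_000))        # -> 5 levels
# The same rows split across ~150 per-entity tables:
print(btree_height(500_000_000 // 150)) # -> 4 levels
```

One level may not sound like much, but it is one fewer block visit on every single index probe, multiplied across 40+ indexes and hundreds of concurrent users.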
Index access contention
One of the 40+ indexes, the unique index supporting the primary key column SYSNO, was populated with numeric values generated by a sequence generator. Likewise, a column named TIMESTAMP was populated with the current system DATETIME value. Both sequence values and datetime stamps are monotonically ascending data values, which present special problems for indexes.
Therefore, the index “leaf” block containing the highest values was always being fought over by all of the sessions inserting new rows, representing a knot of contention within the application that is difficult to resolve.
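A tiny simulation makes the contention visible. If each 2 KB leaf block holds about 100 entries (figures invented for illustration), then concurrent sessions inserting consecutive sequence values all target the same right-most leaf, while randomly distributed keys scatter across many leaves:

```python
import random

# Invented figure for illustration: assume a 2 KB leaf block holds roughly
# 100 index entries, so a key's leaf block is simply key // 100.
ENTRIES_PER_LEAF = 100

def leaf(key):
    return key // ENTRIES_PER_LEAF

random.seed(42)  # reproducible illustration

# 50 concurrent sessions each grab the next value from a sequence:
next_val = 1_000_000
seq_leaves = {leaf(next_val + i) for i in range(50)}

# 50 concurrent sessions insert keys spread over a settled key range:
rand_leaves = {leaf(random.randrange(1_000_000)) for _ in range(50)}

print(len(seq_leaves))   # -> 1: every session fights over the same leaf block
print(len(rand_leaves))  # many distinct leaves, so little contention
```

This is why ascending keys funnel every inserting session into a single "hot" block, no matter how large the index grows.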
Could not exploit caching
Oracle7 had introduced a new CACHE command for caching individual tables into memory. With only one table in the application, it would be impossible to cache frequently used logical entities into memory.
Users could not write queries against DATA
It was too difficult and complex for humans to write queries on the DATA table, with the need to join back to DATA again and again in order to retrieve structural metadata before getting around to the actual purpose of the query itself.
All developers were now short-circuiting these recursive self-joins by memorizing specific SYSNO values and hard-coding specific SYSNO values into their queries. This had the effect of reducing the number of recursive self-joins in the query, allowing them to run and complete, but it also had the side effect of eliminating all of the flexibility designed into the Vision application.
As I described these shortcomings, Randy kept shaking his head, muttering about what a piece of junk the Oracle database was, what kind of fools complained about the complexity of a revolutionary data model when writing queries, and so on. Finally, he simply stood up and walked out of the meeting.
It was the last point, the least technical point, that resonated with everyone. Each of the previous points was technical mumbo-jumbo, but the point about humans not being able to write queries struck a nerve.
Death of a vision
Also present in the meeting was a sales representative from a market research company, whom we’ll call “Joe,” who had been trying to sell his market-research software to Upstart. Joe knew that he would have to extract data from Vision, as his application was an Executive Information System (EIS), which is more commonly referred to as a data mart today.
Joe mostly sat and listened throughout the meeting. After I finished my laundry list of problems with Vision, and after Randy stormed out of the room, he finally spoke. He asked one of the Vision project team leads, one of Randy’s faithful, whom we’ll call “Rick,” whether Vision was indeed as complicated as I was making it out to be. Rick proudly answered, “Yes, it is.” Joe then asked how long it took to train an application developer to implement changes in functionality, and Rick frowned, thought a moment, and then answered, “Six months.” Joe looked thunderstruck and slowly repeated, “Six months?” Rick explained that they had just finished bringing a new application developer up to speed on Vision, and it took six months. “He was really bright, too!” Rick said pensively. Joe nodded and then turned to the new IT director.
He stated, “You are in a world of hurt. It takes six months to train a new maintenance person and a database consultant,” waving at me, “says that the simplest of reports can only be created with the greatest difficulty using the most low-level reporting tools.” He stood up, walked to the white board, picked up a pen, and wrote:
“RUN LIKE HELL!“
Joe said, “You people,” waving at the accounting director, “have been operating in the dark for over four months, unable to get real financial data from Vision. You haven’t done a valid month-end closing since this new system started.” The new IT director looked quickly at the accounting director, who looked a little sheepish as he explained, “We’ve never been able to get our reconciliation data.” Rick quickly interjected, “It is pretty difficult to get that report out.” Joe smiled and continued, “It would be a waste of your money and my time to have you buy my market-research application, because you would not be able to feed it any data either.” At this, the marketing director straightened up, looking alarmed. I didn’t realize it at the time, but this was the decisive blow that killed Vision.
That night, I took Joe out for drinks and we had a great time as he filled me in on all the undercurrents I had missed or failed to understand. Techies can be pretty pathetic at that stuff, but occasionally folks take pity on us, and I realized what I wanted to be when I grew up.
Another thing that the good folks at Upstart didn’t fully realize was that Randy had his own vision. Although he had been introduced to Upstart as an employee of Oracle Consulting, he quickly cut them out of the picture by forming his own company and taking the entire engagement for himself. This is considered extremely bad form in any contracting business, where the basic ethic is “you dance with them what brung ya!” Customer contacts are the life-blood of the IT contracting business, and contracting companies consider it a capital offense to steal one, and rightly so. As soon as it appeared that Upstart was leaning toward his ideas for the new application, Randy bailed from Oracle and formed his own business, which we’ll call “MetaVision.”
MetaVision and its founder signed a contract with Upstart to develop the Vision system for Upstart, but MetaVision managed to reserve for itself the source-code rights to resell Vision as it saw fit. This was an extraordinary concession by Upstart, as works for hire are usually owned by the paying customer. How Randy managed this sweetheart deal is another mystery, but undoubtedly he felt that his completely flexible design for Vision was innovative enough that he could not part with all rights to it. Additionally, since Vision was billed as the “last application ever to be built,” Randy could have argued that he was putting himself out of business with this development effort, so he needed to retain rights.
Over the course of the two-year project to develop Vision, Randy charged Upstart $200/hour as the application architect, grossing an estimated $800,000 or so in billings. In addition, he hired two junior programmers and billed them to Upstart at $100/hour, grossing another $800,000 or so over those two years. Since the two junior programmers were being paid around $60,000/year apiece (good money in those days!), Randy was doing quite well for himself, making over $500,000/year. For the few months that Vision was in production before the entire development team was fired, Randy continued to bill Upstart at the same pace, and he even attempted to sue for breach of contract after he was fired.
All in all, Randy had found a lucrative gravy train, allowing him to bill well over $1.5 million over two years, while he got to experiment with his new ideas, and then keep the fruits of his labors to resell to other customers to boot. After the Vision project was cancelled, Randy dissolved MetaVision and went trekking in the Himalayas for several months. In the ten years that have elapsed since, I have not seen or heard anything about him, although I have been watching. I recently found the slides from a presentation from someone with his name, dated in 1998, on the practical aspects and technical challenges of operating a porn website. It seemed to fit, and the author was quite adamant about the advantages of copyrighting images even if they were stolen. He also recommended strongly that content management be as automated and flexible as possible, touting the software he developed for his own website. I’m sure that he’s making as much money as ever.
Looking back at Vision
After the decision was made to replace Vision in January 1994, Upstart conducted a search for a replacement application. Analysis was initiated to rewrite Vision into a more conventional data model, but Upstart executives squashed “Vision II” before it ever got off the ground.
The software selection process was a repetition of the one that had occurred two-and-a-half years earlier, which had resulted in the decision to develop Vision. This time, however, the runner-up, a tried-and-true yet aged application based on Digital VAX-VMS and RMS (instead of Oracle), was selected. It was the complete antithesis of Vision, and I was sure that Randy was aware of it.
While the “new” application was being deployed, I had imagined that I would disengage from Upstart and find a new engagement, but a fitting punishment was being devised for me also. Upstart and BigMedia, Inc. arranged to have me manage Vision as its new database administrator until it was retired, even though I had no prior experience as a DBA. Thus, my career swerved into database administration, away from application development, a detour I have yet to correct.
In November 1994, the new application finally went “live” and Vision was decommissioned. During those nine months, I learned hard lessons in 24×7 production support, managing “hot” backups and archived redo log files, managing space in the database as well as file systems and logical volumes, working with Oracle Support and applying patches, and database recovery at 4:00 a.m. I worked 20-hour days, 120-hour weeks, slept under desks to avoid direct lighting, ate the worst that vending machines could offer, and picked up all kinds of other bad habits that persist to this day.
The main lesson that I took away from this fiasco was that sometimes the cleverest people make the most fundamental errors, losing sight of the ultimate goal. The ultimate goal of the Vision system was to create an integrated order-entry and customer-service application to meet the needs of a fast-growing company, not to change the world of IT and usher in a brave new world of programming.
I sometimes wish that I too had heeded Joe’s admonition to “RUN LIKE HELL,” but at other times I realize that this job really does beat working for a living.