There is no doubt that in the past few years an explosion of AI artifacts has flooded our daily lives.
Terms like Machine Learning (ML), Large Language Models (LLMs), and Reinforcement Learning have stepped out of the academic and industry niche shadows into the light of our everyday vocabulary.
It seems each day a new milestone is reached: new apps that correct your grammar, teach you a foreign language, and tag your friends in photos flood the digital market; and the apps you were already familiar with are now "ML enhanced".
Even your favorite phone brand is now likely to sell phones with specialized hardware to accelerate those ML-enhanced apps.
The idea of machines more intelligent than, or at least as intelligent as, humans has been in the collective consciousness for quite some time.
Like the idea of an alien race coming to conquer Earth, intelligent machines were mostly restricted to art (literature, cinema, etc.).
However, we were no strangers to the adjective smart used in conjunction with technology.
Terms like Smart Home and Smartphone have been around for a few years now.
They evoke a house that is more than a house: one that can be controlled remotely, that is smart yet under your control; nevertheless, something more than a house.
The same applies to the Smartphone concept: it is a phone but much more; you can manage your calendar, watch movies, play games, etc. It is as if our ordinary cellular phone had become smart.
The idea of the computer being smart has been lurking for decades; it is difficult not to call smart something that can do in a split second what would take you days to accomplish.
But at the end of the day, in the comforting warmth of our beds, we could sleep peacefully because we knew those machines were not smart, they just seemed to be.
Two years ago (or so) that belief was put to the test.
When ChatGPT made its appearance, smart machines no longer seemed to be a myth or a literary creation; they were very real. There was a web page you could visit and chat with what seemed to be another human being.
We always look for the next funny video, the next viral challenge; it only stood to reason that we would look for the next big step of this newfound technological marvel.
We were not disappointed.
Soon evaluations of the intelligence of such systems were underway, and claims as bold as ML models reaching human-level intelligence were made.
Even the Godfather of AI spoke out about the dangers of chat-bots after quitting his job at one of the most prominent companies.
I have absorbed enough pop culture to know that you don't mess with The Godfather.
And so, the politicians heeded his call and the call from other intellectual and technological authority figures.
They rolled up their sleeves, burnt the midnight oil and worked on limiting what was claimed to be a Clear and Present Danger to us all as humankind.
Of course, the next logical step was to claim that a digital super-intelligence was underway.
Digital Super-intelligence is an obscure term that nevertheless seems quite intuitive.
To me, an atom in the digital universe, it is incredibly abstract; but I am conscious enough to realize that Super-intelligent means more intelligent than me.
To convince us of this fact, several studies showing AI models achieving better grades than humans at different academic levels have been published.
I will not dwell on these reports, as I can not determine what makes a good lawyer, or how one measures a lawyer's or a personal assistant's competence.
However, I will use a bit of simple intuition to try to make sense of this.
As a society we love to classify and rank; it helps us cope with the world's complexity.
We classify ourselves into social classes, our educational institutions are classified, and as students we are ranked and classified.
It is not surprising that we seek to be as high in the hierarchy as possible, because it is perceived that the higher the ranking, the broader our options.
A better school makes a better resume, better grades make better chances of being accepted into a program or considered for a job, and so on.
The ladder's height is also associated with intellectual requirements: the higher on the ladder, the more intellectually demanding the activity; for example, the level of intellectual wit required to steer a multinational company is not the same as the one needed to work as housecleaning staff.
I don't know what it takes to be a CEO, or a recognized prizewinning academic for that matter, but I do know a little bit more about house cleaning.
It only stands to reason that, if I'm facing a Super-intelligence, then cleaning my house should be a crumb of cake for such an entity.
I come from a middle-class family, I have memories of my family hiring house-cleaning services since I was a child.
However, my mother raised me to know how to perform various house chores, in particular, how to clean a house; you can't lead without knowing what you want, how you want it and how you can make it possible.
So, I learned how to iron my clothes when I was around 9, and when I was 13 or so I woke up at 4 am on school days to vacuum the carpeted top floor of our house on my hands and knees with a not-quiet-at-all vacuum cleaner.
On weekends I did the laundry with a strong focus on water reuse.
From the age of 10 I learned a bit of cooking, and I had been the de facto helping hand on grocery shopping for my grandmother since I was 6.
Nowadays, as a grown man, I vacuum the apartment every third day or so (the perks of being an adult), and I vacuum not only the floor but the bed, the desks, and some of our bookshelves; not all of them, because that would take too long. On average I devote an hour and a half to this chore.
Vacuum cleaning must be done in a specific order and with care; as a good Mexican family, my girlfriend and I have enough fragile family relics that must be handled delicately.
Arguably, as a society we consider house cleaning to be a less intellectual job than most; therefore, it should be low-hanging fruit for a digital God.
If we stepped from a simple chat-bot to an entity with college-grade intelligence in the blink of an eye, then it should take just another blink of an eye to have a cheap, fully functional robot able to perform a task as mundane as vacuuming floors, doors, beds, sofas, etc.
Why do I require such an artifact?
The answer is simple: evaluation.
I can not hope to be able to evaluate an artifact with such complex goals as human level intelligence.
It is simply too complex, too abstract; it tells me nothing that the model has been trained on all the books in a given library or on tons and tons of data scraped from the Internet. I'm a Computer Scientist with a big imagination, but this escapes even my imagination.
How can I test something that has been trained on a dataset that I can not even begin to comprehend? How can I make reliable general conclusions about its intelligence? How can I test for its divinity?
I can test on what I know, and so, I propose the vacuum cleaner test.
If an offspring of this Super-intelligence can vacuum my apartment exactly as I would do, then I would consider there is a chance the offspring has the required intelligence to clean houses.
My argument seems flawed; it is obvious there is much more to the making of a robot for such a task than I have laid out.
It is too simplistic, but not more simplistic than the statement made by Eric Horvitz in an interview with Brian Greene, in which he claims new knowledge is created only by combining pre-existing thinking patterns and methods.
I will dwell on this later on.
Now, I would like to explore how this newfound technology seems to have taken such deep roots so quickly.
Since its birth, Computer Science has been aware of its limitations: as computer scientists we know the set of problems that can be solved effectively by an algorithm is minuscule compared to the set of all problems.
It is unknown how many (unique) problems can be solved effectively by an algorithm, but the reality is: it is not easy to create an algorithmic solution, and even when there is one, its memory or processing requirements might not be appealing for real-world applications.
Take for example the Traveling Salesman problem: the goal is to find the shortest tour through a list of cities, given the distances between them.
The Traveling Salesman problem is a well-known NP-hard problem; basically, finding the exact solution would take a prohibitive amount of time as the number of cities grows.
However, approximation algorithms and heuristics exist that ensure a good approximation to the optimal solution.
An algorithm is an explanation, not in natural language but in a procedural one.
Follow these steps, and you will get your solution.
Each step transforms the data towards the solution.
These steps can be described with different degrees of detail, from the abstract sort this list of numbers to the concrete cmp r8,r9.
And equivalent steps can be interchanged.
That is the core idea.
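To make that concrete, here is a minimal sketch of such an explanation, written in Python (my own illustrative example, nothing more): insertion sort for the abstract instruction sort this list of numbers.

```python
# An algorithm as an explanation: follow these steps and you WILL get
# a sorted list, for any input. Each step moves the data toward the solution.
def insertion_sort(numbers):
    for i in range(1, len(numbers)):            # take the next unsorted element
        current = numbers[i]
        j = i - 1
        while j >= 0 and numbers[j] > current:  # shift larger elements right
            numbers[j + 1] = numbers[j]
            j -= 1
        numbers[j + 1] = current                # drop the element into place
    return numbers

print(insertion_sort([5, 2, 9, 1]))             # [1, 2, 5, 9]
```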
A heuristic is a semi-explanation in the same procedural language.
Follow these steps and you might get a solution at the end.
A heuristic can be tweaked, and the person making the tweaks has an idea of their effect at the time of the modification.
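And here is a minimal sketch of a heuristic for the Traveling Salesman problem from above: the classic nearest-neighbor strategy (again my own illustrative Python, not one of the approximation algorithms with proven guarantees).

```python
# A heuristic as a semi-explanation: follow these steps and you MIGHT get
# a good tour; there is no guarantee it is the shortest one.
import math

def nearest_neighbor_tour(cities):
    """cities: list of (x, y) points. Returns a tour as a list of city indices."""
    unvisited = set(range(1, len(cities)))
    tour = [0]                                  # a tweakable choice: where to start
    while unvisited:
        here = cities[tour[-1]]
        # another tweakable step: always jump to the closest unvisited city
        nxt = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbor_tour([(0, 0), (1, 5), (5, 2), (6, 6), (2, 3)]))
# [0, 4, 1, 2, 3] -- a reasonable tour, but possibly not the optimal one
```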
But there are problems that any elementary school student can solve easily which have no known algorithm nor efficient heuristic; one example is character recognition.
Character recognition is fundamental to reading and writing; in our daily lives we recognize a thousand different characters written in a variety of fonts.
From our hand-written TO DO list, to the newspaper, we are able to recognize the letters one by one.
For this simple task there is no known algorithm nor heuristic that can guarantee an outcome.
We do have, however, license plate readers for speeding tickets, toll roads, etc. that are mostly automated.
We also have picture-to-text applications, in which a picture of a text is given to the program and the program translates that picture into an editable text document.
And even more, text-to-speech applications are now in common use.
The usefulness of these applications is undeniable; text-to-speech can give people with impaired eyesight access to texts that would otherwise be denied to them.
License plate readers for speeding tickets should allow the police department to reassign the officers whose job was to watch for speed-limit compliance to more complicated tasks.
There is a catch here, though: all these applications do not run algorithms offering a guarantee; they run something obscure, they run on Neural Networks.
On one hand, it is difficult to come up with an algorithm for a given problem, and the well of these problems seems to be drying. On the other hand, as we have been exploring what problems are in the well, many other problems have been classified as out-of-the-well; but these problems have real-world applications and a very real, real-world hunger for them.
It turns out that, for those out-of-the-well problems, constructing heuristics is a very challenging task; what heuristic would you construct to recognize handwritten characters in your native alphabet?
How would you describe what a letter a is?
An algorithm is an explanation, a heuristic a semi-explanation; a neural network is a magic trick the magician can't explain.
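A minimal sketch shows the magic trick at work; I use scikit-learn's bundled 8x8 digits dataset here, and nowhere in the code do we describe what any digit looks like, we only show examples.

```python
# A minimal sketch: train a small neural network to recognize handwritten
# digits with no explicit rules about what a "3" or a "7" looks like.
# All calls here are standard scikit-learn APIs.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                          # 1797 labeled 8x8 grayscale images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# One hidden layer of 64 units; we never state what any digit looks like.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
# The trained weights offer no human-readable explanation of WHY a given
# image is classified as, say, a 4 -- the magician can't explain the trick.
```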
The overall, super-charged AI wave that I see every day on social media and news outlets prophesies the coming of a new era, the new revolution of AI.
In a way, seeing, hearing, and reading all this hype reminds me of Bob Dylan's 1964 song The Times They Are A-Changin'.
It was written in a very different social context with a very different intended message, but I can't help drawing some parallels; to me, the very first verse exemplifies the hype.
Come gather 'round people
Wherever you roam
And admit that the waters
Around you have grown
And accept it that soon
You'll be drenched to the bone
If your time to you is worth savin'
Then you better start swimmin' or you'll sink like a stone
For the times they are a-changin'
The sense of imminence in Dylan's song is as strong as the one we hear in the news or read on social media.
You have to get on the AI boat or you will be left behind, God forbid, you will be out of a job because of AI.
If you don't get on the AI boat now, then when it is at full speed you will be just a helpless observer of the revolution.
I'm not talking only about app development or industry adoption (about which I admit I know only a minuscule bit); it goes even further: academic adoption.
Soon after ChatGPT made its debut, one of the most important associations in the computing community, the ACM (Association for Computing Machinery), laid out guidelines for its use in academic production.
I must admit I was surprised when I learned about this document; I expected ChatGPT's debut to be treated as a nice technological addition worth exploring to unveil its secrets, not as a tool to be used right out of the box.
It turns out that there were papers submitted in which a significant portion was written by ChatGPT and there were even conference judgments of papers (reviews in the lingo) written by ChatGPT instead of the reviewers!
To me, this was astonishing! Why would those who pursue knowledge by pushing the boundaries of what is known so willingly accept magic tricks in their craft?
The versions vary, from using a prompt to write or evaluate a paper, to using it just to polish writing.
In the latter case, there has been a quite famous and successful application for that purpose for quite some time, so I fail to see the need to use ChatGPT as such; but, well, it could be.
In the former case, the question would be why use it?
There has been a boiling challenge (not to label it a crisis, though I might as well) within the computing academic community.
Its root is the ancient question: how do you evaluate an academic's performance?
Is it by the number of papers published, their venues, and the number of citations? The number of courses taught? What the students have to say about them? How much external funding the subject can attract?
Once more, we need to classify to cope with the complexity; in this particular classification the answer is: you consider all of them, and the higher you score in each category, the higher you sit in the hierarchy.
It is a simple metric.
For each category there are recipes to boost your rank.
To attract external funding, you can take courses designed to teach you how to address that particular audience, and rely on peers who have gone through the same experience.
To have a better review from your students, you engage them in your course, and you engage them with the technologies they are familiar with, among other tactics.
To teach more courses, you polish the ones you have taught and free up time for new ones.
But what about writing more papers? What about getting more citations?
What is the recipe to have more fresh ideas and for those to take root into the community?
It is not because we are academics that we want our ideas to be heard and taken into account; that desire is common across all disciplines, intellectual or not.
But, when your evaluation in the form of more is better depends on it, how do you tackle this challenge?
It was not surprising when scandals of Citation Cartels broke out (I cite you and you cite me, quid pro quo), or of Authorship Sales (spot open for an author on an accepted paper at X for Y money), to name a few.
Knowledge is a merchandise in a world that keenly educates more and more people to produce this merchandise.
But if you don't want to fall into those unethical depths, how can you hack the system?
One possibility is to be part of a network: whatever paper the network produces, all are authors.
I think this is a bit of a gray area, but the justification I find most appealing is: for the network to work, everyone has to pull; some with ideas, others with writing, others with funding. If one piece is missing the network fails, and so all are authors of the work.
When this idea is stretched to include people in the network just because of their prestige, that is when it goes to the dark side.
This has not solved the fundamental problem of having ideas, and not just ideas but good, interesting ideas, whatever that means.
Well, it turns out that we have a bag full of unapproachable-by-traditional-means ideas that we know are interesting for real-world applications.
We have a new well that is not dry at all.
But to approach those ideas in these opaque ways, we need data.
In the world before the Internet, when Neural Networks were conceived, having data was a dream; even having a machine to execute those ideas was a wild fantasy.
But that was then; today we have both: we have a vast knowledge base, data has been the currency for a couple of decades now, and we have fast electronics.
We have electronics meant to make things look pretty and realistic on the screen, the irony seems to be that those same electronic components now are being used to make the computer look realistically human to us.
The stars seem to have aligned; now you only need data, a very hefty machine, and an immeasurable dose of patience and diligence.
The first two require money, but the last two, well, academics do not lack them.
And so, a new wave of papers addressing those hidden away problems with data and energy hungry opaque approaches washed over the academic shores.
What must be done now is to tweak the base model, but a tweak can yield truly unexpected results, so the trick is to tweak correctly.
And perhaps equally important (or more so) is to procure data; and not just any data: it must be curated, labeled, good enough for the model to work.
Then the magic happens in the electric pathways of the machine, and at the end you have a series of numbers that promise to solve your problem.
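A minimal sketch of that workflow, assuming the Hugging Face transformers and datasets libraries (the base model, dataset, and sizes are just illustrative choices, not anyone's actual recipe):

```python
# "Tweak the base model": take a pretrained model, feed it curated, labeled
# data, and fine-tune it with gradient updates.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)         # the base model to tweak

dataset = load_dataset("imdb")                  # curated, labeled data
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")
dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args,
        train_dataset=dataset["train"].shuffle(seed=0).select(range(1000))
        ).train()                               # the tweak: gradient updates
```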
The paper follows and if you were the first to solve that problem, a good number of citations are almost assured.
And this is putting aside all the energy required to realize the idea; consider, for example, Microsoft weighing the resurrection of a Three Mile Island nuclear reactor to secure more energy.
The same plant that, just by the grace of God, did not end in tragedy.
What did we, as a knowledge-pursuing community, learn about those problems after all that human creative effort?
The answer is, sadly, not much.
Perhaps we gained some insights after analyzing the mountains of data, but the core problems still elude us.
Are we shifting from a community that explains problems to one that tosses loaded dice and then interprets the outcome of the throw? Like modern dowsers?
Our new, developing technology seems to be our modern Mr. Tambourine Man (Bob Dylan, 1965):
Hey! Mr. Tambourine Man, play a song for me
I'm not sleepy and there is no place I'm going to
Hey! Mr. Tambourine Man, play a song for me
In the jingle jangle morning I'll come followin' you
...
Take me on a trip upon your magic swirlin' ship
My senses have been stripped, my hands can't feel to grip
My toes too numb to step
Wait only for my boot heels to be wanderin'
I'm ready to go anywhere, I'm ready for to fade
Into my own parade, cast your dancing spell my way
I promise to go under it
...
Then take me disappearin' through the smoke rings of my mind
Down the foggy ruins of time, far past the frozen leaves
The haunted, frightened trees, out to the windy beach
Far from the twisted reach of crazy sorrow
Yes, to dance beneath the diamond sky with one hand waving free
Silhouetted by the sea, circled by the circus sands
With all memory and fate driven deep beneath the waves
Let me forget about today until tomorrow
Hey! Mr. Tambourine Man, play a song for me
I'm not sleepy and there is no place I'm going to
Hey! Mr. Tambourine Man, play a song for me
In the jingle jangle morning I'll come followin' you
But I'm not one to pass judgment; after all, I ended up studying Computer Science because my computer and its baudly link to the big world was my Mr. Tambourine Man.
In 2000, during a trip to my local music CD store, I came across a new album: WYSIWYG by Chumbawamba.
It was love at first sight, I can't explain exactly why but I loved that album.
By that time I was already big-time into computers, and my favorite song on that album was Pass It Along, a plain critique of an over-technologized, numb society.
I have kept it with me constantly, but recently it took on a new meaning for me.
Not long before this writing, I became aware of a paper claiming that LLMs are useful for suggesting novel research problems.
Basically, researchers developed an LLM system tuned to suggest research problems; then experts were summoned to also suggest novel problems, and both sets of problems were blindly judged by experts.
It turns out that the problems suggested by the LLM system were ranked higher than the problems suggested by humans.
It was not a surprising paper; many voices, financially and academically oriented, had already advocated the use of LLMs to suggest academic development routes.
Research is the art of asking questions and seeking answers.
Sometimes satisfactory answers can be found, sometimes they elude us, but the cornerstone is to never stop asking questions.
In Computer Science, colleagues have voiced an (unconscious) linearity in the question-asking process.
Opaque evaluation criteria like interesting/exciting problem drive this linearity.
Linearity is not intrinsically incorrect; it gives a sense of direction. The problem, as always, is when it becomes the only driving force: there is no route other than what is considered exciting.
I think there is a more primal aspect to this: give people more of what they want.
It is natural that if LLMs can give us more and more problems that we consider novel and interesting, then their use will increase.
As long as the demand keeps growing, the supply will be there, it is just a matter of energy and it seems we are keen to harvest it.
And a researcher is mostly as worthy as the papers in top-tier venues they produce.
But back to Pass It Along; I took the liberty of altering it a little bit.
The parts between square brackets [...] are the original lyrics.
Send this song to twenty people
Add your name, don't break the cycle
Pass it along by word of mouse
Save the world, don't leave the trend [house]
Because a sure problem [virtual office] in a sure trend [virtual home]
Means you'll never have to drive through the wrong part of research [town]
Pass it along by word of mouse
Save the world, don't leave the trend [house]
Ah-ah, what do you want to write today
[Ah-ah, where do you want to go today]
Ah-ah, something that won't get rejected
[Ah-ah, somewhere you could never take me]
Ah-ah, what do you want to write today
[Ah-ah, where do you want to go today]
Ah-ah, something that won't get rejected
[Ah-ah, somewhere you could never take me]
So, here's your final resting place
Your heaven is protected by security gates
Shut out your curiosity [the world], it's getting worse
Save yourself, don't leave the trend [house]
Because a fulfilling problem [happy future] is a thing of the past
And there's always another repeat (repeat)
Shut out your curiosity [the world], it's getting worse
Save yourself, don't leave the house
Ah-ah, what do you want to write today
[Ah-ah, where do you want to go today]
Ah-ah, something that won't get rejected
[Ah-ah, somewhere you could never take me]
Ah-ah, what do you want to write today
[Ah-ah, where do you want to go today]
Ah-ah, something that won't get rejected
[Ah-ah, somewhere you could never take me]
It was a surprise when the Nobel prize in physics was announced.
It was awarded for advances in machine learning employing methods from physics.
At the end of the day, the Nobel Foundation can recognize anyone they feel like recognizing; there is no universal definition of greatest benefit to humankind.
Therefore, I will not even try to argue whether it should have been awarded or not.
It is a singular point, and like most singularities you can not study it directly, instead you study its neighborhood.
From my real-time survey of my social media outlets, the community was mostly divided into three groups: one said what the actual f..k?, another said CS will steer and power the development of science, and the final one stayed silent.
One professor that I follow on the dying blue bird on its crucifix, a person who writes top-notch code, basically said physics was done.
I'm old enough to have read my fair share, enough to know those kinds of claims are plain wrong.
Nevertheless, even when there are tons of history texts on human development, and even the history of CS has seen false promises (dot-com crash, anyone?), that seems to fail to dissuade people who should be beacons of restraint and measure from plainly stating that science will only advance thanks to us, the computer scientists.
I understand that this prize, and the one in chemistry (yes, another one going to machine learning), give the field some sense of recognition.
Computers are ubiquitous and yet, despite this omnipresence, just a handful of people know what The Turing Award is.
Mainstream media in the first decade of the present century loved to state the new rich were geeks, a derogatory term.
But now, awarding the most visible prize to computer people in two non-computational disciplines creates the feeling of computer science not only standing on its own, but being at the front row.
Of course the Nobel prize was not awarded to, say, sensor networks for using physics of battery fatigue to influence communication and processing algorithms to enhance the life of a sensor network and in general to provide strategies for creating more efficient techniques of computer use for a greener future (I may have exaggerated a bit but you get my drift).
No, it was awarded for foundational discoveries and inventions that enable machine learning with artificial neural networks, right at the peak of its hype; and just as a side note, the Scientific Background to the Nobel Prize never defines what machine learning is, but it does claim that it is revolutionizing science.
Of course, practitioners can not just make their beds and go to sleep.
Now it is time to Stand and Deliver (1988), to pick up the baton and run at full speed.
The revolution will not happen on its own.
But, what revolution? The generative revolution?
The Nobel committee was careful in their selection of case-studies.
No hype at all, no wild predictions, no Nobel generation; just standard function approximation of (very) specific problems that require massive analysis of experimental data.
There is no doubt that ANNs are excellent function approximators and they will continue to play an important role in the development of science and technology.
But no matter how good your function approximator is, it still needs a function to approximate in the first place! That is where innovation happens.
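A minimal sketch of what I mean by function approximation, in illustrative Python with scikit-learn (the target function and network sizes are arbitrary choices):

```python
# Fit a small neural network to samples of a known function. The network
# only approximates sin(x); someone still had to decide that sin(x) -- or
# the physics behind the data -- was the thing worth approximating.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(500, 1))   # sample points
y = np.sin(X).ravel()                           # the function we approximate

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict(np.array([[0.5]])), np.sin(0.5))
# close, but only ever an approximation of a function chosen by a human
```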
The Nobel committee used the word learning, so I will draw a small parallel with actual research on human learning.
In 1988 a new piece of software saw the light of day; its name: Cabri-géomètre.
Cabri's goal was to allow people to explore plane geometry.
I was exposed to it and other pedagogical simulation software during the 90s because my parents were researchers in mathematics education.
Thanks to them I even got my hands on a TI-92 with Cabri on it! How cool was that!
I loved geometry so when I took the Modern Geometry course in college I had to solve all the problems in two ways: a) with my ruler, compass and pencil and b) with Cabri.
Of course there is something magical about seeing the nine-point circle come to life on a monitor; you can even zoom in all you want and see the intersection points.
I also grew up around TI calculators, and I was always amazed at how you could move a graph right, up, left, zoom in and zoom out, etc.
However, without knowing what the nine-point circle is and how to construct it, or what function you want to plot, Cabri is just a minimalist graphical user interface and my TI-Nspire is a good looking paperweight.
It has been almost 40 years since the first Cabri version saw the light of day and research papers on learning are still written to study whether its use in the classroom promotes students learning.
It has not been settled whether Cabri and the plethora of teaching software have revolutionized learning.
How can we claim machine learning (whatever that means) has been achieved when we have no idea of the optimal strategy for fostering learning in our own students?
At most, in my view, the case-studies are the equivalent of an intern in charge of some mundane task that no senior actually wants to do.
I don't see people claiming interns are the ones revolutionizing science. Maybe we should start to do so.
I think this prize was irresponsible.
There are a lot of different wordings the committee could have used, but they chose machine learning; they could very well have phrased it as ...that enable complex phenomena approximation with artificial neural networks, but no, they had to use a term twisted by big corporations and their minions.
What can happen when you put revolution, machine learning, and science together in a context where the revolution of machine learning has been the dominant big-tech narrative?
You automatically validate that narrative, because you are showing that it works for one of the most complex creations of humankind, science; with no clarification whatsoever.
If science is being revolutionized, then your mundane text writing task or whatever can be revolutionized too by means of this fantastic machine learning model; but to serve you all, we need more power, we need more resources for this world-wide revolution.
You don't need to be a genius to follow this simple plain reasoning.
One thing is sure: from the moment the prize was awarded to The Godfather, his introduction gets simpler, from Turing Award winner, the most prestigious award in CS, the equivalent of the Nobel Prize, to a plain Nobel laureate in physics.
In Mexico, corn and chili are the cuisine's cornerstones.
With a plethora of variations, both ingredients are cherished and precious.
Any elementary school student knows that if you want to buy half of a kilogram of corn made tortillas, you have to pay half of the price of a kilogram.
Likewise, if you want to buy half of a quarter kilogram of chipotle chili, you have to pay half the price of a quarter kilogram.
In general, they are aware that the cost of half of something is half the cost of the unit.
We don't know exactly when humans made this powerful abstraction, but it may have started with the introduction of natural numbers, things to count things.
Before numbers and arithmetic, probably when three persons wanted to share six goods equally among themselves, they would assign one good to each in turn until there were no goods left to assign.
And the same process would be repeated for another six goods, and so on.
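A minimal sketch of that pre-arithmetic procedure (illustrative Python; note there is no division anywhere, only one-to-one assignment):

```python
# Deal goods one at a time, round-robin, with no notion of division.
def share_equally(goods, people):
    shares = {p: [] for p in people}
    for i, good in enumerate(goods):
        shares[people[i % len(people)]].append(good)  # next person in turn
    return shares

print(share_equally(["g1", "g2", "g3", "g4", "g5", "g6"], ["A", "B", "C"]))
# {'A': ['g1', 'g4'], 'B': ['g2', 'g5'], 'C': ['g3', 'g6']}
```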
So, where exactly are we with LLMs?
It could be that we are in an equivalent of a pre-number stage.
Something that gives the correct answer when the question is really close to the exact question in the training set.
When the question shifts away from the question in the training set, then the correctness of the answer also shifts.
We could also be in a stage equivalent to Babylonian mathematics.
Scratching the surface by solving special cases.
Or, are we in a stage similar to the invention of Calculus?
When invented, Calculus just worked; even when the concept of infinitely small quantities was not formal, the math worked.
It took close to 200 years to formalize the concept of limit, but practically it could be regarded as extra assurance for something that had been seen to work.
Perhaps we just need that extra assurance about the power of LLMs, but they will stay practically unchanged until we get it, because they just work.
I think we are somewhere between the pre-number stage and Babylonian mathematics.
It is clear that on a variety of (simple) questions, LLMs give plainly wrong answers.
Some argue the problem lies in how the questions are formulated, and hence that it is an unfair test.
This is the equivalent of a student complaining about a test question designed to measure understanding because the wording did not match a template.
LLMs can handle questions they were designed to handle, like "mix these components and tell me what you can find according to the theoretical models you know".
As impressive as it is, I believe those models are not intelligent, they are just really expensive calculators that can not describe the steps involved in the calculation.
Will we finally be able to achieve explainable AI?
Or perhaps we have to review our hypothesis of what a neuron is and how neurons work.
After all, nature has always enchanted us with its complex simplicity and perhaps a line of dots connected to all the next possible dots is not as elegant as nature intended.