Guest post by my dad, K.S. Loganathan
Introduction
Algorithms are finite sequences of steps for solving a problem and achieving a predictable outcome. Many algorithms are used in everyday life – cooking from recipes, writing essays, sorting documents, recognizing faces, driving a car, or deciding what to order at a restaurant. In computers, numerous tasks, such as calculations, spellchecking, or an online search, are performed with algorithms. The sequence of steps is converted into code using a programming language of your choice. The number of lines of code is a software metric used to measure the size of a computer program; a million lines of code, if printed, would occupy about 18,000 pages of text, and modern operating systems run to millions of such lines.
Modern computers have advanced from being calculating behemoths to tackling real-world tasks, dealing with chance and probability, trading off time against accuracy, and using approximations. Data usage has become immense (big data), both in the scale of sampling and in the number of sample parameters, sometimes leading to complex algorithms run on supercomputers. The machine’s ability to keep improving its performance without human help (machine learning) has had a powerful impact on Artificial Intelligence-powered technologies. Machine learning systems are replacing older algorithms in many applications, with superior-to-human-level performance. Deep learning algorithms can make better use of large data sets than the old algorithms, although they may require more processing power. As these skills spread quickly, sufficient data for general performance improvements may become easier to obtain and be used to complement human skills.
Person-to-person exchanges have been transformed by advances in communications technology, where computers have become not only the conduit but also the endpoints, the ones doing the talking. These machine-to-machine problems and their algorithms at once mimic and highlight person-to-person challenges. In a recent interview, actor Nandita Das said that the struggle between man and machine that Charlie Chaplin depicted in ‘Modern Times’ has now shifted to one between man and algorithms.
The use of algorithms and artificial intelligence presents many challenges, and insight into both the power and the limitations of machine learning methods is vital. A.I. algorithms embodied in machines are used either to carry out tasks that would normally require human involvement or to provide expert-level advice to humans. Even as search-engine research has advanced, it has created new opportunities for discovering parallels between minds and machines.
Algorithms may range from simple to complex and vary in precision from the random (coincidence) or intuitive to complex models approaching clairvoyance or prescience. Time-honored algorithms like pro-and-con lists and to-do lists are applicable to situations as diverse as deciding on matrimony, forecasting share prices, report writing, and scheduling. Algorithm design draws not only on computer science, mathematics, and engineering, but also on kindred fields like statistics and operations research. Algorithms can be a means to decide how to use time, space, and effort more efficiently in our daily lives, especially when there are no real benchmarks by which to judge our performance.
Algorithms to Live By
‘Algorithms to Live By,’ authored by Brian Christian and Tom Griffiths, explores human algorithm design – the search for better solutions to the challenges people face daily – through a process-driven approach. Brian Christian is a science writer and Tom Griffiths is a professor of psychology. They combine computational models with human psychology to offer practical advice on a variety of everyday problems, from getting the best apartment or appointing a secretary to organizing your closet or understanding your memory capacity. The book outlines in simple language, without mathematical or coding symbols, some simple strategies which, although not perfect, come with a statistical probability advantage. Some of the algorithms explained in the book are summarized here.
A number of the algorithms are based on queue management, latency, and determining the right time to switch from exploring to exploiting. Time is of the essence, and the interval makes the strategy. An interesting one addresses the ‘optimal stopping problem’: to maximize the chances of the best outcome, we ought to pass over the first 37% of the options available, whether in the sequence of samples or in the total time available for decision making. The 37% rule sets a calibration period during which you assess what works and what does not, then commit, at the first opportunity, to the first option better than everything seen so far. It means not committing too quickly, and the principle can be extended to other problems, such as when to sell an asset or when to ease into the next available slot in a parking lot. It also helps in choosing between new and known-best restaurants, selecting keywords in advertisements, and deciding when to quit gambling or a life of crime.
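To make the 37% rule concrete, here is a minimal Python simulation of the classic secretary problem. The book itself contains no code, so the candidate count, trial count, and function names below are illustrative assumptions, not the authors’ own:

```python
import random

def secretary_trial(n=100, cutoff_fraction=0.37):
    """Simulate one round of the 37% rule on n candidates.

    Candidates arrive in random order. We look at (and reject) the
    first 37%, remember the best of them, then commit to the first
    later candidate who beats that benchmark. Returns True if we
    ended up with the single best candidate overall.
    """
    candidates = list(range(n))            # n - 1 is the best candidate
    random.shuffle(candidates)
    cutoff = int(n * cutoff_fraction)
    benchmark = max(candidates[:cutoff], default=-1)
    for value in candidates[cutoff:]:
        if value > benchmark:
            return value == n - 1          # we commit here
    return candidates[-1] == n - 1         # ran out: stuck with the last one

trials = 100_000
wins = sum(secretary_trial() for _ in range(trials))
print(f"Chose the best candidate in {wins / trials:.1%} of trials")
```

Across many trials the success rate hovers near the theoretical maximum of roughly 37% – far better than the 1% chance of picking one of a hundred candidates blindly.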
Taking the future into account, rather than focusing just on the present, is what drives innovation. The urge to find what’s best often trumps what’s new, but the choice between them can be made through the explore/exploit trade-off: a fine balance between gathering new knowledge and cashing in on a known good result acquired by experience. Applications of algorithms such as the Gittins index range from new product development to choosing eateries and assessing drug efficacy.
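The Gittins index itself is computationally heavy, but a much simpler stand-in, the epsilon-greedy strategy, captures the same explore/exploit tension. In this sketch the eatery names and their hidden payoff probabilities are invented for illustration:

```python
import random

def epsilon_greedy(arms, pulls=10_000, epsilon=0.1):
    """Explore/exploit with the epsilon-greedy rule: with probability
    epsilon try a random arm (explore); otherwise pick the arm with
    the best observed average payoff so far (exploit)."""
    counts = {arm: 0 for arm in arms}
    wins = {arm: 0 for arm in arms}
    total = 0
    for _ in range(pulls):
        if random.random() < epsilon:
            arm = random.choice(list(arms))
        else:
            # Untried arms get an optimistic score so each is sampled once.
            arm = max(arms, key=lambda a: wins[a] / counts[a] if counts[a] else 1.0)
        if random.random() < arms[arm]:    # hidden payoff probability
            wins[arm] += 1
            total += 1
        counts[arm] += 1
    return total / pulls

# Three eateries with different (hidden) chances of a good meal.
rate = epsilon_greedy({"old favorite": 0.60, "new bistro": 0.50, "food cart": 0.70})
print(f"Average payoff per visit: {rate:.2f}")  # approaches the best arm's 0.70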
Order-and-method algorithms can help us clear messy offices, wardrobes, and bookshelves (covered as sorting, caching, and storage algorithms in the book). The best way to get organized with your clothes and other articles in daily use may be ‘bucket sorting’: assigning items to three piles arranged by frequency or time of use. For example, the in-demand items could be stored nearby and the least recently used items discarded or placed remotely. Cache eviction is essential in both computer and human memory, and information retrieval must be sequenced properly.
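The ‘least recently used’ rule of thumb is exactly the LRU eviction policy that computer caches use. A minimal sketch, assuming a closet with room for only three garments (the class and the example data are illustrative, not from the book):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: when full, evict the item
    that has gone unused the longest, mirroring the rule of keeping
    in-demand items nearby and discarding neglected ones."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)         # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

closet = LRUCache(capacity=3)
for garment in ["coat", "shirt", "jeans", "shirt", "scarf"]:
    closet.put(garment, garment)
print(list(closet.items))  # ['jeans', 'shirt', 'scarf'] – 'coat' was evicted
```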
Most scheduling problems admit no ready algorithmic solution. A to-do list is ordinarily good enough. In a sufficiently complex situation, random choices may fare better than analyzing every chain of probabilities and overfitting the data.
Performance metrics for onboarding employees and annual incentive assessments can be misleading and can persuade employees to focus on the wrong things. No universal practice for maximizing human productivity or athletic skill has yet been established. A computer network manages its transmission capacity against fluctuating demand by doubling the work of the performing connections and halving that of the laggards, preserving flexibility rather than hierarchy – something that cannot be readily applied to human skill assessment.
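The network rule described here is in the spirit of TCP congestion control: probe upward while transfers succeed, back off sharply at the first sign of trouble. A toy sketch of that double-on-success, halve-on-trouble feedback loop (the tick schedule and starting rate are invented):

```python
def adjust_rate(rate, succeeded):
    """Feedback rule in the spirit of TCP congestion control:
    a connection that is keeping up gets more work, while one
    that falls behind is throttled back."""
    return rate * 2 if succeeded else rate / 2

rate = 1.0
for tick, succeeded in enumerate([True, True, True, False, True, False]):
    rate = adjust_rate(rate, succeeded)
    print(f"tick {tick}: rate = {rate:g}")
# rates: 2, 4, 8, 4, 8, 4 – capacity keeps probing upward, then backs off.
```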
The authors make the case for using the algorithms that make the most sense in the least amount of time.
The problem with algorithms
All life decisions involve taking a view of the future, whether grounded in machine algorithms or in the rhythms of planetary motion. No ‘always-right’ algorithm exists for every situation. Algorithms only model associations; they do not imply underlying causal processes. While algorithms based on probability modeling, game theory, and technology forecasting may be adequate for less complex situations, they are not suited to situations with no clear pattern of cause and effect.
We must also be aware of the bias embedded in opaque algorithms, which carry the biases of their designers. Bias can flow not only from design but also from datasets that reflect a biased mindset in their gathering. The tendency to use machine algorithms to track and shape social behavior – from political activism to customer-attraction strategies – is increasing, and fake news and data-privacy concerns have grown in recent years. At the same time, quantum computing is vastly expanding the capacity to deal with big, messy datasets at speeds exponentially greater than conventional computers’ binary bits allow. The unrivaled speed and algorithmic power it represents can help expedite drug discovery, analyze credit risk, and model climate change in a relevant and cost-effective manner.
Human algorithm design will only grow in importance, and we must choose algorithms wisely when adopting them to make life-affirming choices.