The past, present and uncertain future of the keys to online identity

Human relationships rely on trust, which is why the true history of authentication extends back long before the first written documents referencing it. I suspect that as soon as humans formed tribes, they found ways to “authenticate” each other at night using specific sounds or watchwords. At the very least, Roman soldiers were using watchwords regularly by the late B.C. and early A.D. periods.

Over time authentication has become more complex. When you actually know someone, it’s easy to authenticate them by simply looking at them. However, as society has become more complicated, humans have had to develop new ways to authenticate people they don’t directly know.

For example, a general in the Roman army may have led thousands of soldiers but couldn’t personally know them all. If one of those soldiers were to walk up to him on the street and claim to be part of his army, he’d need some way to prove it. Besides watchwords, society has developed many authentication factors, such as personal identification numbers (e.g. Social Security numbers), passports, and other types of ID. These analog forms of identification allow even strangers to authenticate one another.

That brings us to the digital age, in which problems with traditional authentication methods have been amplified. Online, you can interact with countless strangers around the world without knowing or seeing them. Thus, figuring out if they really are who they claim to be has become even more important. Let’s walk through a brief history of digital authentication in order to learn what works and what doesn’t, and how authentication needs to evolve to support our online future.

The 60s: It all started with the password

Passwords have existed for a long time in the analog world, but there is one common story many cite for the first digital or computer password. Back in the 1960s, computers were huge, exorbitantly expensive, and relatively slow devices (by today’s standards) that were out of reach for most people. You typically found single, mainframe-like computers at a few universities and the largest enterprises. Of course, when there’s only one expensive computer available to a large group, you end up with a lot of demand. That’s why universities like MIT developed time-sharing operating systems such as the Compatible Time-Sharing System (CTSS) to allow many users to share the resources provided by a single computer.

At the time, users accessed these centralized computers through dumb (or sometimes smart) terminals. Often many users would share one of these terminals to send their tasks to the mainframe, which of course led to the issue of shared file systems. How does one user keep his or her files private from another user? Fernando Corbató, an MIT researcher, later professor, and one of the pioneers of CTSS, solved that problem in 1961 by using passwords to protect user files on this multi-user time-sharing system.

Ironically, another MIT researcher, Ph.D. student Allan Scherr, demonstrated one of the weaknesses of password systems by hacking CTSS just a year later. Any password-based authentication system needs to store passwords somewhere in order to validate them against a user’s input. Server-based systems tend to store all the user passwords in one file. With the goal of increasing his personal CTSS usage time, Scherr simply found the location of that file and used some of his allotted time to print it, gaining access to every user’s password.

Being a smart hacker, Scherr also wanted to cover his tracks. In 1966, an apparent “bug” popped up in the CTSS system, which showed the full password file to all users who logged into the system. Years later, Scherr fessed up to stealing the original master password file and admitted the “bug” was his way to throw off suspicion by putting the passwords into everyone’s hands.

In any case, passwords were the first method we used to authenticate to computers more than 50 years ago, but they immediately showed the industry some problems they’d need to solve to make them more secure.

The 70s: Protecting passwords with salted hashes

Though cryptography and authentication are two different things, they share similar technologies and have a symbiotic relationship. Another key period for authentication was the 1970s, when Bell Labs researcher Robert Morris created a way to secure the master password file for the Unix operating system. As the CTSS password leak demonstrated, it’s a bad idea to store passwords in a clear-text file. Morris used a cryptographic concept called a hash function to store passwords in a form that computers could still verify without having to store the actual passwords themselves.

This original idea made its way to most other operating systems and continued to evolve with additional protections. For example, as attackers found ways to crack or “brute-force” hashed passwords, the industry developed stronger hash functions and added elements of randomness (called salts) so that identical passwords produce different stored hashes. In short, Morris’ development of hash-based password storage in the 70s went a long way toward making authentication systems more secure.
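
To make the hashing idea concrete, here is a minimal sketch of salted password storage in Python. To be clear, this is not Morris’ original Unix crypt() scheme; it uses PBKDF2 from Python’s standard library simply because it’s readily available, and real systems today typically reach for purpose-built algorithms such as bcrypt, scrypt, or Argon2.

```python
# A minimal sketch of salted password hashing and verification.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); the random salt makes identical passwords hash differently."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The key point is that the system only ever stores the salt and the hash. It can verify a login attempt by recomputing the hash, but an attacker who steals the password file doesn’t get the passwords themselves.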

As an unrelated but ironic aside, Morris’ son, Robert Tappan Morris, is the person responsible for creating the first-ever computer worm, in 1988.

Mid-70s: Public-key cryptography surfaces

In addition to hashes, there are other cryptographic technologies that are useful for authentication. One such technology is public-key or asymmetric cryptography. Without going into the math behind how it works, this technology involves two keys: a public key that you can safely share with the world to help identify yourself, and a private key you use to sign things, thus verifying your identity. A digital certificate—also known as an identity certificate—essentially binds your public key to your identity, with a signature that vouches for the fact that the key really belongs to you. These certificates can be used as factors of authentication, to validate identity online.

Asymmetric cryptography and public/private keys were first discovered in the early 70s, starting with classified government research from the Government Communications Headquarters (GCHQ). While those discoveries weren’t declassified until the 90s, public researchers found their own ways to use asymmetric key technology in the mid-1970s, eventually resulting in three famous researchers, Ron Rivest, Adi Shamir, and Leonard Adleman, creating the popular RSA asymmetric key algorithm. Digital certificates and signatures have become an important factor—specifically, something you have—in the authentication world.
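
As a rough illustration of the sign-and-verify idea behind digital certificates, here is a short Python sketch using the third-party cryptography package (pip install cryptography). The RSA key size and padding choices are just common defaults, not anything prescribed by the history above.

```python
# Sign a message with a private key and verify it with the matching public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The private key stays secret; the public key can be shared with anyone.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I really am Alice"

# Only the holder of the private key can produce this signature...
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# ...but anyone holding the public key can verify it (this raises an
# InvalidSignature exception if the message or signature was tampered with).
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```

That asymmetry, where anyone can check a signature but only the private-key holder could have produced it, is exactly the property digital certificates build on.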

The 80s: One-time passwords emerge

As more digital systems leveraged passwords for security, more researchers and hackers found ways to abuse them as well. This led the industry to constantly look for increasingly secure methods of authentication. One of the biggest risks with a normal, persistent password system is that if an attacker can guess, steal, or intercept your password, they can simply replay it. To combat this, what if a user had an entirely different password every time they logged in? That is the concept behind one-time passwords (OTPs).

There are two main challenges with OTP systems:

  1. How do you algorithmically create new, non-predictable passwords in a way that the central system can still validate them?
  2. How do you actually get those new passwords to the user?

In 1984, Security Dynamics Technologies, Inc. patented a methodology for doing just that using a special hardware device and a time-based method. As the saying goes, though, there are many ways to skin a cat. Over time, the industry developed many OTP standards using different techniques.

Some are time-based (TOTP), some use a challenge and response (e.g. OCRA), others derive each new password from an incrementing event counter (HOTP, or event-based), and so forth. On the user side of the equation, early OTP methods often required a user to have special hardware to generate the next password, but other systems could also send you the new password through other channels (text, the web, etc.).

From the 1980s on, we saw many different OTP schemes develop, like S/Key and OTPW, which eventually led to OATH (the Initiative for Open Authentication), an industry effort to standardize OTP algorithms such as HOTP and TOTP. Keep in mind, this period also saw the emergence of two- or multi-factor authentication (2FA and MFA), as OTPs were commonly used in addition to traditional passwords. We’ll come back to MFA again shortly.
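
To get a feel for how the time-based flavor works, here is a minimal TOTP sketch in Python along the lines of RFC 6238, the algorithm behind most authenticator apps. It glosses over details that real deployments handle, such as base32-encoded secrets and tolerance for clock drift.

```python
# A bare-bones time-based one-time password (TOTP) generator.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Derive a short-lived numeric code from a shared secret and the current time."""
    counter = int(time.time()) // interval           # changes every `interval` seconds
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

shared_secret = b"example-shared-secret"             # provisioned to both client and server
print(totp(shared_secret))                           # e.g. "492039", valid for ~30 seconds
```

Because both sides derive the code from the same shared secret and the same 30-second window, the server can validate a login without the user ever sending a reusable password that an eavesdropper could replay later.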

The 90s: Public-key infrastructure

As we saw in the 70s, public-key cryptography became important to authentication. When used properly, a signed digital certificate is a great factor of authentication. However, it has one problem—how can I ensure that the public key I have for someone really was created by that specific person? It’s completely possible for a threat actor to create a new public/private key pair and publish the public key as though it belongs to someone else. To solve this problem, you need some sort of trusted third party to vouch for these keys and certificates, validating their legitimacy.

Enter public key infrastructure (PKI), a set of technologies and standards that manage the creation, storage, and distribution of keys and digital certificates. This infrastructure is typically built around certificate authorities (CAs), which help validate that certain keys or certificates really belong to one entity or another. While researchers likely built their own PKI technologies almost as soon as public-key cryptography came into being, PKI didn’t really surface commercially until around the 1990s. One technology that helped was X.509, a digital certificate standard that emerged in the late 80s and matured through the early 90s.
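
To show what this looks like in practice, here is a short Python sketch that generates a key pair and a self-signed X.509 certificate using the third-party cryptography package. In a real PKI, a certificate authority’s key would sign the certificate rather than your own, and the name and validity period below are just placeholders.

```python
# A minimal, self-signed X.509 certificate (a real PKI would have a CA sign it).
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# The key pair whose public half the certificate vouches for.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Placeholder identity; a CA would verify this before signing.
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice@example.com")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                     # who the certificate identifies
    .issuer_name(name)                      # self-signed, so issuer == subject
    .public_key(key.public_key())           # the public key bound to that identity
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())             # a CA's private key would go here instead
)

print(cert.subject)
```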

Smart cards are one solid example of PKI and X.509 certificates being used for authentication. These physical cards contain a chip that holds your signed certificate and are managed and maintained through PKI. As an aside, they also contributed to the growth of MFA. Some PKI systems have you establish a personal PIN when creating your key pair. In this case, a smart card actually uses two factors of authentication: the digital certificate on the card (something you have) and the PIN you use to unlock it (something you know). More on MFA shortly.

Late-90s detour: CAPTCHAs

While not core to authentication, let’s take a quick diversion to talk about a technology that helps secure some digital authentication solutions. In the late 90s, various researchers invented techniques to tell humans apart from computers. They called these tests CAPTCHAs, which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” A common example you’ve surely encountered is a distorted image of letters. The picture is distorted enough that automated image-recognition algorithms have trouble identifying the individual letters, but humans can pretty easily figure it out.

You can’t use CAPTCHAs to authenticate people, but you can use them to prevent some automated authentication attacks. A brute-force password attack is one that simply uses the speed of computers to try every possible password on a login page. A simple way to prevent these attacks is to require a CAPTCHA solution with each authentication attempt. That way you limit password retries to human speed, greatly reducing the efficacy of brute-force attacks.
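
As a rough sketch of that pattern, the login flow below refuses to even check the password until the CAPTCHA response validates. The verify_captcha() and check_password() helpers are hypothetical placeholders for whatever CAPTCHA provider and credential store a site actually uses.

```python
# Hypothetical sketch: gate each login attempt behind a CAPTCHA check so
# automated brute-force tools are throttled to human speed.
def verify_captcha(captcha_response: str) -> bool:
    """Placeholder: ask the CAPTCHA provider whether this response is valid."""
    return captcha_response == "demo-solved"   # stub for illustration only

def check_password(username: str, password: str) -> bool:
    """Placeholder: verify the password against a salted-hash credential store."""
    return (username, password) == ("alice", "correct horse battery staple")

def login(username: str, password: str, captcha_response: str) -> bool:
    # No solved CAPTCHA, no password check: this is what slows brute-force attacks down.
    if not verify_captcha(captcha_response):
        return False
    return check_password(username, password)

print(login("alice", "correct horse battery staple", "demo-solved"))  # True
print(login("alice", "password-guess-1", ""))                         # False: CAPTCHA unsolved
```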

The 2000s: MFA adoption takes hold

Now it’s time for MFA, which is continually evolving but really became popularized in the 2000s. In examining the previous decades, you’ve probably realized there are different factors you can use to authenticate. The industry categorizes the factors into three buckets: something you know, have or are. Something you “know” would be things like your favorite color, a password, or a PIN. Something you “have” includes things like a hardware token or phone that generates one-time passwords, a digital certificate, or a smart card or device containing a digital certificate. And something you “are” is your face, iris, fingerprint, or heartbeat.

As technologies used to digitally authenticate people over the decades have advanced, so too have the techniques attackers find to trick or bypass digital authentication. The truth is this digital arms race will likely never end. We will design better ways to identify one another online, but I suspect attackers will also leverage new technologies to find better ways to trick our identification processes. This is precisely why MFA was born.

MFA is just the act of combining two or more authentication factors together to more strongly identify the person you are communicating with. And 2FA is just the bare minimum MFA option, using just two factors. MFA solutions can technically use as many factors as you have or want. Examples might include a password and an OTP from a token or phone (something you know plus something you have), a PIN and a smart card, or a password and a fingerprint. It doesn’t matter exactly which factors you pick, as long as you combine more than one category. The benefit is that even if an attacker figures out your password, they’d still need a way to hijack or steal your other factor too.
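
Tying the earlier sketches together, a two-factor login is conceptually just an AND of independent checks. The sketch below reuses the verify_password() and totp() helpers from the password-hashing and OTP examples above, with a made-up UserRecord standing in for a real account store.

```python
# Conceptual 2FA sketch: both factors must pass. verify_password() and totp()
# come from the earlier sketches; UserRecord is a made-up stand-in for an
# application's real account database.
from dataclasses import dataclass

@dataclass
class UserRecord:
    salt: bytes            # per-user salt for the password hash
    password_hash: bytes   # salted hash of the password (factor 1: something you know)
    otp_secret: bytes      # shared secret for TOTP codes (factor 2: something you have)

def login_with_2fa(user: UserRecord, password: str, otp_code: str) -> bool:
    # Factor 1: the password, checked against its salted hash.
    if not verify_password(password, user.salt, user.password_hash):
        return False
    # Factor 2: the current one-time code from the user's token or phone.
    return otp_code == totp(user.otp_secret)
```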

The truth is, very secure industries or organizations, like government facilities, banks and the largest enterprises, used MFA long before the 2000s. For example, the early OTP products often used special hardware, and combined OTPs with a normal password. As mentioned earlier, smart cards combined PINs and digital certificates. However, it wasn’t really until the 2000s that the general public started using MFA regularly, even if they didn’t know it. Essentially, bank or debit cards are a type of 2FA. You have a card with a chip that includes a certificate assigned to you, with a PIN that only you should know. Bank cards were likely the first 2FA method most normal people ever used.

In a nutshell, by the 2000s most of the general public was using 2FA in some form. Meanwhile, big enterprises with strong security needs were also trying to add MFA to other important information services. Nevertheless, during this time MFA was still not a part of consumer computing, or even present in small to mid-sized businesses. That comes next.

The 2010s: The smartphone era

In the 2010s, passwords are still by far the most common digital authentication factor. However, they are also showing their age and weaknesses.

When used properly, following very diligent security practices, passwords are a decent authentication factor. The problem is that most humans don’t follow the arduous best practices, and many organizations that manage passwords don’t follow good practices themselves. The result of this password mishandling has been countless password database leaks over the past few decades, which have proven that passwords alone are insufficient to protect our online identities.

MFA could help solve this problem, but until this point the technologies and alternatives for authentication were prohibitively expensive or complex.

That’s where smartphones came in and enabled the future of authentication. In the 2010s, the ubiquitous nature of smartphones brought two previously less-attainable authentication technologies to the masses: biometrics and 2FA.

  • Biometrics gets cheaper. We haven’t talked much about biometrics, but this technology started long before 2010. Biometrics are the “something you are” in authentication. Essentially, biometrics consists of various technologies that can scan and measure something about your body, which in many cases can be used to identify you. While proprietary and standalone technologies existed for biometric authentication long before the 2010s, it wasn’t until just a few years ago that smartphone companies started adding biometric scanners to their devices, making ubiquitous biometric identification a reality. For instance, Touch ID came out from Apple in 2013, followed by Face ID in 2017. Android devices also had fingerprint and facial scanners before that. During this time, we even saw OS vendors like Microsoft add Windows Hello to more easily take advantage of biometric identification during login.
  • The consumerization of 2FA. Before the 2010s, MFA and 2FA solutions were pretty proprietary, expensive, and complex systems that many small and mid-sized businesses couldn’t afford to manage. At the very least, many of the solutions required the acquisition and distribution of hardware authentication factors that were just too expensive and taxing for some businesses to deal with, let alone the average consumer. However, smartphones began to change that as well. First, almost everyone already has a smartphone. Rather than buying specialized hardware, why not use the smartphone to distribute a second factor of authentication? Many OTP systems leveraged smartphones to quickly deliver one-time passwords to users via SMS text messages. Now you could easily combine a traditional password with an OTP as a second factor, without the need for additional hardware.

In short, the popularity of smartphones in the 2010s and some of the new technologies they acquired greatly increased the general public’s ability to leverage stronger authentication methods.

Today: Strong MFA for everyone

If there’s one thing you should learn from all this, it’s that all authentication technology continues to evolve in order to get better at uniquely and reliably identifying users, while withstanding the new attack methods from criminals. While I’ve forced certain technologies into eras, each one – from the initial way we stored passwords, to the cryptographic standards that support authentication, to the methods we use for biometric scans – has constantly evolved to become more effective and more widely accessible.

In my opinion, the late 2010s and the 2020s should be the era of MFA for everyone due to major decreases in complexity and cost. Smartphones have been the key to MFA adoption among consumers.

That said, early authentication technologies are often later found to have weaknesses. The most recent problems with smartphone MFA have to do with SMS-based options. Text messages are sent in the clear, and attackers can intercept them, whether through a man-in-the-middle attack, phishing, or smartphone malware. Over the past few years, we’ve seen multiple cases of attackers bypassing authentication systems despite their use of SMS-based 2FA. This means we need stronger smartphone 2FA options.

As always, these technologies evolve to meet our latest requirements. Today, many smartphone MFA solutions have moved to “push” authentication technology. Rather than using the clear-text SMS channel, these solutions leverage the encrypted and secure communications that smartphone vendors use for notifications. The latest smartphone authentication solutions often also use unique aspects of each smartphone’s individual hardware, adding yet another factor to the authentication equation.

Finally, these solutions even incorporate the phone’s biometric options when available. With these types of smartphone options sitting in our back pockets, every consumer can strongly authenticate using a password, a unique token on their phone, the device’s own hardware, and even a biometric.

If you have a smartphone and aren’t using it to add MFA to every aspect of your digital life already, take note of the lessons above and think hard about implementing it soon as a proven method for securing your personal and professional data.
