What is cloud computing, in simple terms?
Cloud computing is the delivery of on-demand computing services — from applications to storage and processing power — typically over the internet and on a pay-as-you-go basis.
How does cloud computing work?
Rather than owning their own computing infrastructure or data centers, companies can rent access to anything from applications to storage from a cloud service provider.
One benefit of using cloud computing services is that firms can avoid the upfront cost and complexity of owning and maintaining their own IT infrastructure, and instead simply pay for what they use, when they use it.
In turn, providers of cloud computing services can benefit from significant economies of scale by delivering the same services to a wide range of customers.
What Is AWS Cloud?
Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments on a metered, pay-as-you-go basis. These cloud computing web services provide a variety of basic abstract technical infrastructure and distributed computing building blocks and tools. One of these services is Amazon Elastic Compute Cloud (EC2), which gives users a virtual cluster of computers, available at all times, through the Internet. AWS’s virtual computers emulate most of the attributes of a real computer, including hardware central processing units (CPUs) and graphics processing units (GPUs) for processing; local RAM; hard-disk/SSD storage; a choice of operating systems; networking; and pre-loaded application software such as web servers, databases, and customer relationship management (CRM) systems.
The AWS technology is implemented at server farms throughout the world and maintained by the Amazon subsidiary. Fees are based on a combination of usage (the “pay-as-you-go” model); the hardware, operating system, software, and networking features chosen by the subscriber; and the required availability, redundancy, security, and service options. Subscribers can pay for a single virtual AWS computer, a dedicated physical computer, or clusters of either. As part of the subscription agreement, Amazon provides security for subscribers’ systems. AWS operates from many global geographical regions, including six in North America.
Amazon markets AWS to subscribers as a way of obtaining large-scale computing capacity more quickly and cheaply than building an actual physical server farm. All services are billed based on usage, but each service measures usage in varying ways. As of 2017, AWS held a dominant 34% of the cloud market (IaaS and PaaS), while its next three competitors (Microsoft, Google, and IBM) held 11%, 8%, and 6% respectively, according to Synergy Research Group.
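The “pay-as-you-go” billing described above amounts to simple metered arithmetic: each service meters its own usage, and the bill is usage times a unit rate, summed across services. The sketch below illustrates the idea; the unit rates are hypothetical placeholders, not actual AWS prices.

```python
# Toy illustration of pay-as-you-go billing: cost is metered usage times
# a unit rate, summed across services. Rates here are invented examples,
# not real AWS pricing.

def metered_cost(usage):
    """usage: list of (quantity_used, unit_rate_in_dollars) pairs."""
    return round(sum(quantity * rate for quantity, rate in usage), 2)

# e.g. 720 instance-hours at $0.10/hour plus 50 GB of storage at $0.023/GB
bill = metered_cost([(720, 0.10), (50, 0.023)])
print(bill)  # 73.15
```

The point of the model is in the second argument of each pair: when usage drops to zero, the bill drops to zero, which is exactly the upfront-cost avoidance described earlier.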
History of AWS Cloud
The AWS platform was launched in July 2002. In its early stages, the platform consisted of only a few disparate tools and services. Then in late 2003, the AWS concept was publicly reformulated when Chris Pinkham and Benjamin Black presented a paper describing a vision for Amazon’s retail computing infrastructure that was completely standardized, completely automated, would rely extensively on web services for functions such as storage, and would draw on internal work already underway. Near the end of their paper, they mentioned the possibility of selling access to virtual servers as a service, proposing the company could generate revenue from the new infrastructure investment. In November 2004, the first AWS service launched for public usage: Simple Queue Service (SQS). Thereafter, Pinkham and lead developer Christopher Brown developed the Amazon EC2 service with a team in Cape Town, South Africa.
Amazon Web Services was officially re-launched on March 14, 2006, combining the three initial service offerings of Amazon S3 cloud storage, SQS, and EC2. The AWS platform finally provided an integrated suite of core online services, as Chris Pinkham and Benjamin Black had proposed back in 2003, as a service offered to other developers, web sites, client-side applications, and companies. Andy Jassy, AWS founder and vice president in 2006, said at the time that Amazon S3 (one of the first and most scalable elements of AWS) “helps free developers from worrying about where they are going to store data, whether it will be safe and secure, if it will be available when they need it, the costs associated with server maintenance, or whether they have enough storage available. Amazon S3 enables developers to focus on innovating with data, rather than figuring out how to store it.” In 2016 Jassy was promoted to CEO of the division. Reflecting the success of AWS, his annual compensation in 2017 hit nearly $36 million.
In 2014, AWS launched its partner network entitled APN (AWS Partner Network) which is focused on helping AWS-based companies grow and scale the success of their business with close collaboration and best practices.
To support industry-wide training and skills standardization, AWS began offering a certification program for computer engineers, on April 30, 2013, to highlight expertise in cloud computing.
In January 2015, Amazon Web Services acquired Annapurna Labs, an Israel-based microelectronics company reputedly for US$350–370M.
James Hamilton, an AWS engineer, wrote a retrospective article in 2016 to highlight the ten-year history of the online service from 2006 to 2016. As an early fan and outspoken proponent of the technology, he had joined the AWS engineering team in 2008.
In January 2018, Amazon launched an autoscaling service on AWS.
In November 2018, AWS announced customized ARM cores for use in its servers. Also in November 2018, AWS announced that it was developing ground stations to communicate with customers’ satellites.
Growth and profitability
In November 2010, it was reported that all of Amazon.com’s retail sites had migrated to AWS. Prior to 2012, AWS was considered a part of Amazon.com and so its revenue was not delineated in Amazon financial statements. In that year industry watchers for the first time estimated AWS revenue to be over $1.5 billion.
In April 2015, Amazon.com reported AWS was profitable, with sales of $1.57 billion in the first quarter of the year and $265 million of operating income. Founder Jeff Bezos described it as a fast-growing $5 billion business; analysts described it as “surprisingly more profitable than forecast”. In October, Amazon.com said in its Q3 earnings report that AWS’s operating income was $521 million, with operating margins at 25 percent. AWS’s 2015 Q3 revenue was $2.1 billion, a 78% increase from 2014’s Q3 revenue of $1.17 billion. 2015 Q4 revenue for the AWS segment increased 69.5% y/y to $2.4 billion with 28.5% operating margin, giving AWS a $9.6 billion run rate. In 2015, Gartner estimated that AWS customers are deploying 10x more infrastructure on AWS than the combined adoption of the next 14 providers.
In 2016 Q1, revenue was $2.57 billion with net income of $604 million, a 64% increase over 2015 Q1 that resulted in AWS being more profitable than Amazon’s North American retail business for the first time. In the first quarter of 2016, Amazon experienced a 42% rise in stock value as a result of increased earnings, of which AWS contributed 56% to corporate profits.
AWS had $17.46 billion in annual revenue in 2017. By end of 2018, the number had grown to $25.65 billion.
In 2019, AWS reported 37% yearly growth and accounted for 12% of Amazon’s revenue (up from 11% in 2018).
Services of AWS Cloud
As of 2020, AWS comprises more than 175 products and services including computing, storage, networking, database, analytics, application services, deployment, management, mobile, developer tools, and tools for the Internet of Things. The most popular include Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (Amazon S3).
Most services are not exposed directly to end users; instead they offer functionality through APIs for developers to use in their applications. Amazon Web Services’ offerings are accessed over HTTP, using the REST architectural style and the SOAP protocol for older APIs, and exclusively JSON for newer ones.
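To make the JSON style concrete, the sketch below shows roughly what a request body for one of the newer JSON APIs (DynamoDB’s PutItem) looks like. The table name and item values are made up for the example, and real requests also require Signature Version 4 request signing, which is omitted here.

```python
import json

# Illustrative shape of a JSON request body for a newer AWS API
# (DynamoDB PutItem). Table name and item values are invented; a real
# call would also carry SigV4 authentication headers.
request_body = json.dumps({
    "TableName": "example-table",       # hypothetical table name
    "Item": {
        "id":    {"S": "user-42"},      # DynamoDB type tags: S = string
        "score": {"N": "17"},           # N = number (transmitted as a string)
    },
})

# A client library would parse responses the same way in reverse.
parsed = json.loads(request_body)
print(parsed["TableName"])  # example-table
```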
Which Companies Use AWS Cloud?
According to Amazon, the number of active AWS users exceeds 1,000,000.
While small and mid-size companies make up the majority of that user base, recent polls by private consulting firms suggest that enterprise-scale users make up at least 10% of that total.
AWS Customers: The Ultimate List
Whether it’s technology giants, television networks, banks, food manufacturers or governments, many different organisations are using AWS to develop, deploy and host applications.
Here are the names that are on record publicly as using AWS:
Aon, Adobe, Airbnb, Alcatel-Lucent, AOL, Acquia, AdRoll, AEG, Alert Logic, Autodesk, Bitdefender, BMW, British Gas, Baidu, Bristol-Myers Squibb, Canon, Capital One, Channel 4, Chef, Citrix, Coinbase, Comcast, Coursera, Disney, Docker, Dow Jones, European Space Agency, ESPN, Expedia, Financial Times, FINRA, General Electric, GoSquared, Guardian News & Media, Harvard Medical School, Hearst Corporation, Hitachi, HTC, IMDb, International Centre for Radio Astronomy Research, International Civil Aviation Organization, ITV, iZettle, Johnson & Johnson, JustGiving, JWT, Kaplan, Kellogg’s, Lamborghini, Lonely Planet, Lyft, Made.com, McDonald’s, NASA, NASDAQ OMX, National Rail Enquiries, National Trust, Netflix, News International, News UK, Nokia, Nordstrom, Novartis, Pfizer, Philips, Pinterest, Qantas, Reddit, Sage, Samsung, SAP, Schneider Electric, Scribd, Securitas Direct, Siemens, Slack, Sony, SoundCloud, Spotify, Square Enix, Tata Motors, The Weather Company, Twitch, Turner Broadcasting, Ticketmaster, Time Inc., Trainline, Ubisoft, UCAS, Unilever, US Department of State, USDA Food and Nutrition Service, UK Ministry of Justice, Vodafone Italy, WeTransfer, WIX, Xiaomi, Yelp, Zynga and Zillow.
The Big Spenders
Clearly, AWS is the cloud computing platform of choice for businesses across a range of industries. But who are the biggest, and how much money are they spending on these services?
According to Intricately, the top ten AWS users based on EC2 monthly spend are:
- Netflix: $19 million
- Twitch: $15 million
- LinkedIn: $13 million
- Facebook: $11 million
- Turner Broadcasting: $10 million
- BBC: $9 million
- Baidu: $9 million
- ESPN: $8 million
- Adobe: $8 million
- Twitter: $7 million
Case studies of companies that use AWS Cloud
Netflix has launched a worldwide test of “Shuffle Play,” a feature for indecisive subscribers that starts streaming a random title based on their viewing history or playlists.
The idea, of course, is to provide yet another way to encourage Netflix users to watch more content on the service — even if they don’t know what, exactly, they want to watch — and potentially get hooked on a new TV show or movie. Currently, “Shuffle Play” is being tested only on connected-TV devices.
“The purpose of the test is to make it easier for members to find something to watch,” a Netflix spokesperson said. The company plans to use the findings from the test to eventually roll out a “shuffle” feature permanently on Netflix: “The hope is to absolutely productize something,” the rep said.
The titles served up by Netflix when you hit “Shuffle Play” will be either shows or movies similar to ones you’ve watched previously; titles in genres you’ve viewed; or content you have saved in the “My List” section.
How Netflix Uses AWS Cloud Inside Its Company
Netflix is the world’s leading internet television network, with more than 100 million members worldwide enjoying 125 million hours of TV shows and movies each day, including original series, documentaries, and feature films. Members can watch as much as they want, anytime, anywhere, on nearly any Internet-connected screen.
Application Monitoring on a Massive Scale
Netflix uses Amazon Web Services (AWS) for nearly all its computing and storage needs, including databases, analytics, recommendation engines, video transcoding, and more — hundreds of functions that in total use more than 100,000 server instances on AWS.
This results in an extremely complex and dynamic networking environment where applications are constantly communicating inside AWS and across the Internet. Monitoring and optimizing its network is critical for Netflix to continue improving customer experience, increasing efficiency, and reducing costs. In particular, Netflix needed a solution for ingesting, augmenting, and analyzing the multiple terabytes of data its network generates daily in the form of virtual private cloud (VPC) flow logs. This would enable Netflix to identify performance-improvement opportunities, such as finding apps that communicate across regions and colocating them. The company would also be able to increase uptime by quickly detecting and mitigating application downtime.
Each log record carries information about the communications between two IP addresses. However, in a dynamic environment like the one at Netflix, where an IP address can float between applications from day to day or even minute to minute, IP addresses alone don’t have much meaning. “The data sources we had before we took on this initiative were one sided,” says John Bennett, senior software engineer at Netflix. “We’d know an application was connecting to others, but we didn’t know both sides of the conversation and how to optimize those communications or the placement of the applications on the network.”
Netflix set out to establish a new data source that could give it more insight into communication among applications and regions by combining VPC flow logs with application metadata.
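For context on what is actually in those logs: a default (version 2) VPC flow log record is a space-separated line of fields describing one traffic flow between two IP addresses. The sketch below parses one such record; the sample values are invented for illustration.

```python
# Field names of a default version-2 VPC flow log record, in order.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(line):
    """Split one flow log line into a field-name -> value dict."""
    return dict(zip(FIELDS, line.split()))

# Sample record with made-up values: one accepted TCP flow (protocol 6)
# of 10 packets / 8400 bytes between two private addresses.
sample = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 "
          "443 49152 6 10 8400 1620000000 1620000060 ACCEPT OK")
record = parse_flow_log(sample)
print(record["srcaddr"], record["action"])  # 10.0.1.5 ACCEPT
```

As the quote above notes, the `srcaddr`/`dstaddr` pair alone says little in a dynamic environment; the value comes from joining these records with application metadata.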
Netflix Centralizing Flow Logs Using Amazon Kinesis Data Streams
From the outset, AWS enabled Netflix to experiment with different approaches to analyzing its network data. “Early in the design process, the flexibility to try different ways of processing the data was important,” says Bennett. “We experimented with multiple designs and used many AWS products to get here.”
The solution Netflix ultimately deployed — known internally as Dredge — centralizes flow logs using Amazon Kinesis Data Streams. The application reads the data from Amazon Kinesis Data Streams in real time and enriches IP addresses with application metadata to provide a full picture of the networking environment. “Usually, we would put the data into a database, which would build an index to enable faster querying,” says Bennett. “Dredge joins the flow logs with application metadata as it streams and indexes it without using a database, which eliminates a lot of the complexity.”
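The in-stream join Bennett describes can be sketched as a generator that annotates each record as it streams past, with no database lookup. The IP-to-application table below is a hypothetical stand-in for Netflix’s internal metadata, not its real data source.

```python
# Hypothetical IP -> application metadata table (stand-in for the real
# application metadata Dredge joins against).
APP_METADATA = {
    "10.0.1.5": {"app": "playback-api", "region": "us-east-1"},
    "10.0.2.9": {"app": "recommendations", "region": "us-west-2"},
}

def enrich(flow_records):
    """Lazily annotate each flow record with app names for both endpoints,
    joining against in-memory metadata as the records stream by."""
    for rec in flow_records:
        rec["src_app"] = APP_METADATA.get(rec["srcaddr"], {}).get("app", "unknown")
        rec["dst_app"] = APP_METADATA.get(rec["dstaddr"], {}).get("app", "unknown")
        yield rec

flows = [{"srcaddr": "10.0.1.5", "dstaddr": "10.0.2.9"}]
enriched = list(enrich(flows))
print(enriched[0]["src_app"], "->", enriched[0]["dst_app"])
# playback-api -> recommendations
```

With both endpoints named, a cross-region flow like the one above becomes immediately visible as a colocation opportunity.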
The enriched data lands in an open-source analytics application called Druid. Netflix uses the OLAP querying functionality of Druid to quickly slice data into regions, availability zones, and time windows to visualize it and gain insight into how the network is behaving and performing.
The scalability of Amazon Kinesis Data Streams was a good fit for the Dredge application because of the cyclical and elastic nature of network usage at Netflix. “When it comes to our networking data, it’s more cost efficient to be able to scale up and down, which is not as easy to do with alternatives to Amazon Kinesis Data Streams,” says Bennett.
Improving Customer Experience with Real-Time Network Monitoring
Netflix’s Amazon Kinesis Data Streams-based solution has proven to be highly scalable, each day processing billions of traffic flows. Typically, about 1,000 Amazon Kinesis shards work in parallel to process the data stream. “Amazon Kinesis Data Streams processes multiple terabytes of log data each day, yet events show up in our analytics in seconds,” says Bennett. “We can discover and respond to issues in real time, ensuring high availability and a great customer experience.”
Netflix is now able to identify new ways to optimize its applications, whether that means moving an application from one region to another or changing to a more appropriate network protocol for a specific type of traffic. “Our solution built on Amazon Kinesis enables us to identify ways to increase efficiency, reduce costs, and improve resiliency for the best customer experience,” says Bennett.
Although a streaming data solution is not new to the IT industry, it is an innovation in the networking space. “Netflix is heavily invested in AWS in part because it abstracts the underlying network, so we don’t have to deal with switches and routers,” says Bennett. “We’re monitoring, analyzing, and optimizing at a higher level of the stack — in ways we would never even consider if we were running our own data centers.”
Discovery Communications (Nasdaq: DISCA, DISCB, DISCK) is a leader in nonfiction media, reaching more than 1.8 billion cumulative subscribers in 218 countries and territories. Discovery is dedicated to satisfying curiosity through 155 worldwide television networks, led by Discovery Channel, TLC, Animal Planet, Science and Investigation Discovery, as well as US joint venture networks OWN: Oprah Winfrey Network, The Hub and 3net, the first 24-hour 3D network. Discovery also is a leading provider of educational products and services to schools and owns and operates a diversified portfolio of digital media services, including Revision3.
Discovery needed to upgrade its website infrastructure, but wanted to avoid a costly upfront one-time expense for updating its hardware. Upgrading would have taken considerable time for the three-person team from Discovery Communications to accomplish, between acquiring the hardware, configuring it, and migrating data to the new system. Discovery also had multiple delivery engines powering its websites, and wanted to consolidate to make its infrastructure easier to manage. Furthermore, the company needed a solution that would allow it the flexibility to pay for only what it used, and the ability to scale quickly to meet demand.
Why Amazon Web Services
Discovery assessed multiple cloud solutions, but none offered the flexibility and pricing of Amazon Web Services (AWS). “AWS was the most mature offering available,” says Igor Brezac, Chief Systems Architect, Digital Media. “The pricing was excellent. We were also attracted by the ability to get new instances up and running at a moment’s notice.” Discovery is now running all of its services on AWS for its US-based digital properties.
Discovery Communications is running about 150 instances of Amazon Elastic Compute Cloud (Amazon EC2), all of which use Amazon Elastic Block Store (Amazon EBS) storage. Discovery uses Amazon Machine Images (AMIs) built with a custom version of Ubuntu. Elastic Load Balancing (Amazon ELB) handles load balancing both externally and internally for Discovery, inside the Amazon Virtual Private Cloud (Amazon VPC). The company uses Amazon Simple Storage Service (Amazon S3) to store static content and host a few websites. Discovery also uses Amazon Route 53 in combination with Amazon ELB for its domain name service. Discovery’s static assets are delivered globally by Amazon CloudFront’s distributed edge servers. In addition, Discovery uses Amazon CloudFront’s dynamic content acceleration feature for services like image resizing and the new Discovery website. “Having a content delivery network (CDN) that delivers both static and dynamic content, including API acceleration, was important to us,” Brezac says.
The use of Multiple Availability Zones (Multi-AZ) has also played a role in Discovery’s success. “This is where Amazon ELB is vital to us,” Brezac continues. “If one Availability Zone is unavailable, Amazon ELB will send the traffic to the other data center. Amazon Multi-AZ is key to the entire deployment. Without that, we wouldn’t have the peace of mind that we do.”
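The Multi-AZ behavior Brezac describes boils down to health-checked routing: the load balancer only sends traffic to Availability Zones that are currently passing health checks. The toy sketch below simulates that decision; the zone names and health states are invented for the example.

```python
# Toy sketch of Multi-AZ failover: route traffic only to Availability
# Zones reporting healthy. Zone names and health checks are simulated.

def route(zones):
    """zones: dict of zone name -> healthy? Returns zones eligible for traffic."""
    healthy = [name for name, ok in zones.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy Availability Zone")
    return healthy

# Normal operation: traffic can go to either zone.
print(route({"us-east-1a": True, "us-east-1b": True}))
# One zone fails its health check: traffic shifts to the survivor.
print(route({"us-east-1a": False, "us-east-1b": True}))  # ['us-east-1b']
```

This is the “peace of mind” in the quote above: losing one zone degrades capacity, not availability.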
Discovery Communications Architecture on AWS
Discovery began implementing AWS in January 2012, and completed site migration in June 2013. “We migrated more than 40 sites to AWS without missing a beat,” Brezac says. “We now host all our digital media on AWS. Using the AWS Cloud gives us great capacity to expand or shrink our infrastructure as business requirements change — we now have an easy way to re-architect any of our sites.”
“Without AWS, it would be harder to focus on business initiatives without having to manage hardware and infrastructure,” Brezac said. In addition, the Digital Media division has evolved from administrators to system engineers, growing their skills and providing more benefit to the company.
Discovery Communications particularly values the horizontal scaling that AWS makes possible. “We’re able to scale to each part of the stack horizontally,” says Eric Connell, Senior Systems Engineer. “So if we’re running out of capacity in any piece of the stack, that piece of the stack automatically scales up to increase capacity.”
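The automatic horizontal scaling Connell describes is, at its core, a target-tracking decision: pick enough instances to keep per-instance load near a target. The sketch below shows the idea; the numbers are illustrative, not AWS’s actual scaling algorithm.

```python
import math

# Target-tracking sketch of horizontal scaling: size the fleet so each
# instance carries roughly `target_per_instance` load. Illustrative only,
# not AWS's real Auto Scaling logic.

def desired_instances(current_load, target_per_instance, minimum=1):
    return max(minimum, math.ceil(current_load / target_per_instance))

# 950 requests/sec at a 100 req/sec-per-instance target -> 10 instances
print(desired_instances(950, 100))  # 10
# Load drops off-peak -> the fleet shrinks back down
print(desired_instances(120, 100))  # 2
```

Scaling each tier of the stack independently with a rule like this is what lets “any piece of the stack” grow without resizing the rest.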
“Without using the AWS API and services, we wouldn’t be able to provide our staff with the tools we do,” concludes Shawn Stratton, Senior Systems Engineer/Architect. “Our entire continuous delivery system and our development platform are built around using the AWS API.”
Discovery uses CDNs for static, dynamic and API delivery. “Amazon CloudFront was able to offer us the scalability and low latency we expect from a CDN with cost savings of 20–25 percent and better manageability,” Brezac says. “Amazon CloudFront APIs and tight integration with other services like Amazon S3, Elastic Load Balancing, and Amazon Route 53 have helped us easily get started and manage our content delivery.”
Blockbuster, owned by the Danish TDC Group, provides streaming movies and TV series to viewers in Denmark, Sweden, Finland, and Norway. It offers transactional video-on-demand services that allow consumers to pay a fee to access content. The service has hundreds of thousands of subscribers in Denmark and is rapidly growing a strong customer base across Sweden, Finland, and Norway.
At A Glance
Blockbuster cut its IT costs in half — while expanding to three new countries — by delivering its transactional video-on-demand content using AWS. Blockbuster delivers thousands of movies and TV series to viewers across the Nordic countries via its streaming service. The company uses Amazon CloudFront to deliver its content and AWS Lambda for serverless compute processes that track customer transactions and help Blockbuster build a detailed picture of customer behavior to personalize services.
Blockbuster, once known for its video rentals, is now a service providing transactional video-on-demand (TVOD) to customers in Denmark, Sweden, Finland, and Norway. Viewers can choose from 8,000 movies and 350 seasons of TV series to watch. Danish company TDC Group, which bought the rights to the Blockbuster name in Denmark and the rest of the Nordics, is the largest telecommunications company in Denmark. TDC Group provides Blockbuster content to more than 300,000 customers in Denmark and thousands across Sweden, Finland, and Norway.
As Blockbuster prepared to expand from Denmark to Sweden, Finland, and Norway, it expected to add thousands of new subscriptions in the first few weeks. The company needed not only to scale to meet this demand, but also to provide fast, high-quality streaming to its customers. What’s more, if the company failed to provide the personalization common in competitive services, it risked not attracting new customers.
Prior to Blockbuster’s expansion, it ran its streaming services in an on-premises data center in Denmark. It was clear, however, that this kind of architecture couldn’t support the expansion to the other Nordic countries.
The costs for building a dedicated data center in each new country were prohibitive, and the maintenance costs to administer an on-premises solution were an additional overhead that Blockbuster couldn’t bear. Henrik Loop, chief technology officer at Blockbuster, says, “From the beginning, we discounted using an on-premises solution. We knew that — like other service providers that successfully stream video content online — we could deliver our IT goals using the cloud. As a ‘startup’ within TDC, we felt the cloud would give us the agility, flexibility, and independence we needed.”
Why Use AWS Cloud
Blockbuster looked at working with some of the leading cloud-service providers to support its service in the Nordic countries. The firm chose Amazon Web Services (AWS) after a series of meetings highlighted the technical capabilities of AWS to support transactional video-on-demand services. Henrik says, “We saw that AWS is enabling broadcasters and content owners to successfully stream to consumers and develop cloud architectures with a focus on analytics.”
Henrik and a colleague worked together to build the AWS infrastructure that would support the TVOD service.
They benefited from prior experience working with AWS and teamed up with NPAW (Nice People at Work), a business-intelligence company for online media, to develop their services.
Blockbuster’s movies and TV programs are streamed via Amazon CloudFront, a content-delivery network that speeds up the distribution of web content and adds security with DDoS protection as standard. Using AWS Lambda, Blockbuster runs serverless compute processes across AWS Regions and Amazon CloudFront edge locations (using Lambda@Edge) without provisioning or managing servers.
These processes include applying transactional labels to purchases so Blockbuster can track customer activity, and providing personalized recommendations based on previous purchases.
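A Lambda function of the kind described above is just a handler with an `(event, context)` signature that the service invokes per event. The sketch below tags a purchase event with a transactional label; the event shape and label scheme are invented for this example, not Blockbuster’s actual code.

```python
# Minimal Lambda-style handler sketch: tag a purchase event with a
# transactional label so downstream analytics can track it. Event shape
# and label format are hypothetical.

def handler(event, context=None):
    """Invoked once per purchase event; returns the labeled event."""
    event["label"] = f"tvod:{event['customer_id']}:{event['title_id']}"
    return event

# Locally simulated invocation with a sample event.
result = handler({"customer_id": "c-123", "title_id": "t-456"})
print(result["label"])  # tvod:c-123:t-456
```

Because the platform runs the handler on demand, there is no server for Blockbuster to provision or manage, which is the serverless benefit described above.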
Blockbuster uses AWS Elastic Beanstalk to deploy web applications at scale without IT-management overhead. It also takes advantage of AWS Elastic Beanstalk to automatically provision core AWS services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) to support Blockbuster web applications as they are developed.
The benefits of using AWS Cloud
The Blockbuster service successfully launched across Sweden, Finland, and Norway in the summer of 2017. Within six weeks, 10,000 people had subscribed to the service, and the AWS infrastructure had easily scaled to ensure high-quality delivery for each subscriber. Says Henrik, “We have brought Blockbuster services to new audiences across the Nordic region with AWS. Now many thousands more can enjoy TVOD and see the best in movies and TV in the comfort of their homes.”
It took just nine months for the three-person Blockbuster IT team to deliver the AWS architecture for the expansion. In this time, the team and NPAW also completed the business-intelligence “buckets,” which Henrik says capture data on transactions and give Blockbuster a detailed view of customer behavior. “We’re not just providing a high level of delivery quality with AWS, we are also personalizing our services to provide customers with greater value.”
Henrik is clear that Blockbuster wouldn’t have achieved its goals without AWS. The cost of expanding the service through an on-premises infrastructure was too expensive. “We’ve reduced our IT costs by 50 percent by using AWS to roll out Blockbuster in Sweden, Finland, and Norway,” he says. “Furthermore, we’ve halved IT-maintenance costs — a crucial saving for any organization. The pay-as-you-go nature of AWS perfectly matches the TVOD model, so it allows us to more closely monitor what it costs us to deliver our services.”
We have now seen many examples of AWS case studies, so we can imagine the role AWS plays for startups as well as big companies. AWS works on a pay-as-you-go model, and its services are fast, secure, and feature-rich; as noted earlier, it holds a dominant share of the cloud market. Because it offers more features than its competitors, both startups and large companies prefer its services. That is why AWS is a pioneer in the cloud world.
So I hope you learned something new from this article.