AWS re:Invent 2018 – Lessons Learned and What’s in Store for Your Agile/DevOps Journey
Webinar Transcript
Myriam –
Good morning, good afternoon, good evening to all. We’re very excited that you were able to join us today on this expert panel webinar, AWS re:Invent 2018 – Lessons Learned and What’s in Store for Your Agile and DevOps Journey. My name is Myriam Sanie and I’m with the marketing team here at Cprime. And now, without further ado, I’ll pass the baton to Dan Frost, our moderator for today. Dan, the floor is yours.
Dan –
My name is Dan Frost and I’m Director of Partnerships at Cprime. Also with me is Kalpan Raval, Global Ecosystem Leader at Amazon Web Services, and also here with us is Brandon Cipes, VP of DevOps at Cprime. We’re going to walk through what’s new and exciting and how it pertains to a lot of the folks we’d consider our customers. AWS re:Invent is an incredibly large conference which showcased the deep application of AWS technology, so today we’ll help distill what took place at the event. Cprime is an AWS partner. We are one of the largest Platinum Solution Providers and offer a wide variety of technical solutions in Agile and work management, and downstream in CI/CD and build-and-deploy technologies, monitoring, and automated testing; a lot of these technologies take advantage of, or run directly on, AWS services. We work in a number of significant verticals. We really haven’t found a customer we haven’t been able to help just because of who they are or their niche vertical, and we work with the biggest of the big, just like Amazon does. So, the conference was an overwhelming display of technology and technology applications. How would you describe it, Kalpan and Brandon?
Brandon –
It’s massive, something like 50,000 people this year across half of the hotels in Las Vegas, and there must have been over a thousand sessions. It’s THE conference of the industry, because cloud infrastructure touches and permeates so many different things: security, blockchain, automation, migration, DevOps, and more. So it’s really a massive get-together.
Kalpan –
I agree. To Brandon’s point, I think the most valuable time is spent hearing from our customers and partners, in terms of the value we get from the education and the knowledge about how to best use these technologies and services, and also learning about what’s coming. And that’s a segue into what we want to talk about today, Dan and Brandon. So thank you for having us here.
Dan –
Excellent. We attended some of the keynotes as well, and I think it was something like seven 60-foot-wide, 30-foot-high screens in a packed room. Cprime was there at the Atlassian booth; Atlassian has strong ties with AWS. As you saw on the earlier slide, a large number of updates and announcements caught our attention: Aurora Global Database, Migration Hub, DataSync, Global Accelerator, Control Tower, the rollout of the private marketplace, and all these things have such broad implications. So Kalpan and Brandon, were there any product updates or announcements that caught your eye in particular? What were they, what was interesting about them to you, and how do you think they might be interesting to the folks who are on this call?
Brandon –
There was the AI-controlled race car, which was kind of cool. Maybe not as pertinent to a business-type meeting, but definitely something people should check out if they’re interested. And to that point, if anybody on the call does have questions, either because they weren’t there and are curious what piqued interest, or because they were there and are trying to understand exactly what the ramifications are, please ask; we’ll be taking questions all through this. One of the big ones that I find really appealing was Global Accelerator. We have a lot of clients, and you’ll see this with Atlassian, where teams are in all sorts of different locations, right? Maybe your headquarters is in San Francisco and you’ve got teams in New York and the UK and all over the world. But the software itself isn’t configured in a fashion where you can have nodes or multiple servers in different locations against the same data set, so latency becomes a pain. For somebody on the other side of the world, these aren’t huge data payloads, but it’s enough that it can be a bit of a pain. Global Accelerator essentially moves your traffic onto the AWS network at the nearest edge location, so as long as you’re staying on the AWS network across the various regions, you can make sure that traffic is in fact getting accelerated and pushed forward as quickly as possible. That’s a huge benefit for a variety of applications that don’t have a native multi-region network configuration available to them. So I suspect this is going to be a very popular one with a lot of clients.
Kalpan –
What I would add is that the ability to scale out internet-facing applications is a huge advantage with Global Accelerator. Think about global traffic management, to Brandon’s point: we’ve got traffic management, and another differentiator, if you will, is the ability to do weighted load balancing and advanced routing rules, for example. In terms of performance, for our DevOps customers, routing through the closest edge locations is now critical to some of our customers, as we learned, and there is horizontal scaling to a million-plus QPS per global IP, so very high-performance capabilities. In terms of availability, you have continuous HTTPS and TCP health checks, and high availability through the network zones; again, a really key differentiator here. Then there’s the isolated network design, which is key, and DDoS protection via AWS Shield, another key differentiator of the service. And when we talk about making it simple: a single global IP simplifies your network design and removes the error-prone work of managing multiple IP addresses and DNS entries, and you can easily move applications between regions and availability zones without updating the public interface. In the context of DevOps and migration, these are some of the key capabilities, and there are more key features you can read about online. But I agree with Brandon on Global Accelerator.
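To make that concrete, here is a minimal boto3 sketch of standing up an accelerator and pointing it at an existing load balancer. The names and ARNs are hypothetical, and this is an illustration rather than a production recipe:

```python
import boto3

# Global Accelerator's control-plane API lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Create the accelerator; it fronts your application with static
# anycast IPs that ingress at the nearest AWS edge location.
acc = ga.create_accelerator(
    Name="demo-accelerator",   # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)
acc_arn = acc["Accelerator"]["AcceleratorArn"]

# Listen for TCP traffic on port 443.
listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Attach a regional endpoint group; TrafficDialPercentage and Weight
# are what enable the weighted load balancing mentioned above.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    TrafficDialPercentage=100.0,
    EndpointConfigurations=[{
        # hypothetical ALB ARN
        "EndpointId": ("arn:aws:elasticloadbalancing:us-east-1:"
                       "123456789012:loadbalancer/app/demo/abc123"),
        "Weight": 128,
    }],
)
```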
Brandon –
And I think it’s actually very similar in many respects to Aurora Global Database. I don’t know how many people on the phone know Aurora, but Aurora is basically Amazon’s database hosting as a service, so you’re not stuck with the updates and a lot of the maintenance and all those kinds of things. Aurora went through versions: it started as a modified version of MySQL that was still MySQL compatible, so you didn’t have to worry about that. But the way they separated the actual query front end of the database from the data management on the back end allowed it to be much more performant; think MySQL that operates at Oracle speeds. So when that first came out, which was a number of years ago at this point, it was very, very exciting for really heavy-duty database workloads. But you still had the issue that you could do real-time replication to other availability zones and things like that, but not necessarily across regions.
And so Global Database, in a lot of the same sense as what Global Accelerator is trying to do, now works at the data level: you can have that replication, and all of your read activity, happening across regions all over the world. That’s really great when you’ve got big, massive datasets sitting behind your applications, and because so many teams and so many more organizations have people all over the place, I think that kind of performance improvement really starts to make a pretty significant difference.
Kalpan –
Agree. And what I would add about Global Database is that a single Aurora database can span multiple AWS regions, to enable fast local reads and quick disaster recovery. So that’s the gain in the context of high availability and durability. I think this is the next set of offerings we came up with, and Global Database is one of those services, focused on really good interoperability.
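For a rough sense of what setting this up looks like, here is a hedged boto3 sketch of creating a global cluster with a primary and a secondary region. Identifiers and the password are hypothetical placeholders, and the supported engine versions would need checking against what Aurora Global Database actually allows:

```python
import boto3

# Primary region: create the global cluster and its primary Aurora cluster.
rds_primary = boto3.client("rds", region_name="us-east-1")

rds_primary.create_global_cluster(
    GlobalClusterIdentifier="demo-global",   # hypothetical identifier
    Engine="aurora",                         # MySQL-compatible Aurora
)

rds_primary.create_db_cluster(
    DBClusterIdentifier="demo-primary",
    Engine="aurora",
    GlobalClusterIdentifier="demo-global",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder only
)

# Secondary region: a read-only cluster that receives replication
# from the primary; note it takes no master credentials of its own.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="demo-secondary",
    Engine="aurora",
    GlobalClusterIdentifier="demo-global",   # joins the same global cluster
)
```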
Brandon –
And so much of it is all interrelated, but there are so many people who are starting to bring more and more of their datasets into the cloud, and they have a mix. Everybody’s got some box under a desk somewhere, or the old data center, and they’re still running PostgreSQL, MySQL, whatever it is. And it can be a bit of a challenge to live in both of those worlds: some of the stuff is behind the firewall and some of it is over in the cloud. One of the things brought up was the DataSync product. Just as Aurora Global Database uses replication to make sure that data is in all these different regions, DataSync is almost more like a service that lets you do the same thing with your own private data sets; private in the sense that they’re on your physical hardware and physical infrastructure. You can have those things start replicating out to AWS services. So it’s one of those stepping stones for a lot of organizations that are increasingly going to the cloud or have fully embraced the cloud. They can use that service to start getting the data up into Amazon, then build apps or whatever else that operates against that data in the cloud. And when you start to think about truly cutting the cord, you can just turn off the things you have stored locally, because the data already exists in the new locations within AWS. These things come up all the time. It’s one thing to say the cloud is awesome and we really want to go to the cloud; there’s a whole lot of other practical questions about how exactly you do that, and it’s your classic fix-the-airplane-while-it’s-in-midair kind of problem. But some of these tools are really making a difference in being able to take that on.
Kalpan –
How I look at it is as a managed data transfer service. What it does, again in the context of migration and DevOps and agility, is simplify moving, copying, and potentially synchronizing large amounts of data between on-prem storage systems and AWS storage services such as Amazon S3 or Amazon EFS, over the internet or Direct Connect, either/or. You don’t have to build a custom solution with various tools, with licensing and the expensive management aspect of it, or use commercial network acceleration software and other capabilities. This is a single point where you can simplify the whole process of moving and synchronizing large sets of data. I think this was great, again in consideration of: how do I operate these workloads, how do I migrate them, and then how do I take care of managing the data and constantly synchronizing at a very high scale. These are the sorts of capabilities to look at as you think about the process of migrating workloads over and having to sync the data between on-prem and the cloud.
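As an illustration of that flow, here is a minimal boto3 sketch: a source location (an on-prem NFS share reached through a previously activated DataSync agent), a destination (an S3 bucket), and a task that syncs them. All ARNs, hostnames, and bucket names are hypothetical:

```python
import boto3

ds = boto3.client("datasync", region_name="us-east-1")

# Source: an on-prem NFS export, reached via a DataSync agent
# that has already been deployed and activated (ARN is hypothetical).
src = ds.create_location_nfs(
    ServerHostname="nas01.corp.example.com",
    Subdirectory="/exports/projects",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:us-east-1:123456789012:agent/agent-0abc"
    ]},
)

# Destination: an S3 bucket, with an IAM role DataSync can assume.
dst = ds.create_location_s3(
    S3BucketArn="arn:aws:s3:::demo-migration-bucket",
    S3Config={"BucketAccessRoleArn":
              "arn:aws:iam::123456789012:role/datasync-s3-access"},
)

# A task ties source to destination; each execution is one sync run.
task = ds.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="projects-to-s3",
)
ds.start_task_execution(TaskArn=task["TaskArn"])
```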
Brandon –
Yeah, it’s a lot of stuff. But it’s really interesting, the problems that so many people are facing today as they try to figure out how to move things along. The other piece where we were having some very interesting conversations, and there was actually a presentation done by Goldman Sachs on this, is how, as organizations get larger and larger, you don’t necessarily have one AWS account with multiple VPCs and segmentation by department or application or however your structure may be; more and more people are now going to multiple accounts. One account per department, or one account per application stream, or whatever else. And as the example went, Goldman Sachs was running something like 1,200 accounts in parallel now, which is kind of a mind-boggling number and a perfect example of the kinds of things you go to this conference to hear about: how other people are handling it.
And one of the other tools they brought up was Control Tower, which is the idea of giving you more control and orchestration over how those accounts are created. The standard security and configuration options are set when the accounts are created, so it can be a little more programmatic, a little more automated, and fit into a larger strategy. Okay, maybe you’re farther along in the AWS journey and now you’ve got all this stuff in there; how do you practically organize it and segment it so that craziness doesn’t ensue? For anybody who has all their stuff in one AWS account, I’m sure you’ve gone to the billing page at the end of the month to try to unwrap exactly which costs go where, and that can be something of an unpleasant experience. So I don’t know what you think, Kalpan, but it kind of felt like the multiple-account approach is becoming the de facto standard for large installations.
Kalpan –
I think you are right. Enterprises that are migrating to AWS typically have a large number of accounts, distributed teams, and a number of applications, so that’s a regular use case for us. As you think about our enterprise customers, what they want to do is set up their own environment based on established best practices, and at the same time use AWS services for configuration, provisioning, ongoing management, and so on. What AWS Control Tower provides is a single location to set up a well-architected multi-account environment. With Control Tower, customers can automate the setup of their environment with identity and access management, and they can use workflows to provision accounts. Customers can select and apply pre-packaged policies enterprise-wide or to a specific set of accounts, and that’s the usual case.
They have distributed teams who are able to provision new accounts quickly, in what is more or less a DIY, self-service model for account creation and management, while central teams have the peace of mind of knowing that the new accounts are aligned with the centrally established, company-wide policies. This gives customers control over the environment without sacrificing speed and agility. So that’s Control Tower, and that’s what it offers to the development teams who design those capabilities in house with a multi-account environment. Like I said, given that they may have a number of teams spread across hundreds or thousands of applications, here’s a solution to be able to manage it all in a single place.
Brandon –
Yes, there’s a lot that’s changing very quickly, and it’s really interesting to see. And again, we’re curious to see what kinds of questions the audience has about things you heard or saw there, because there are so many different things, and not even necessarily just technology. One of the other things they put forth was Private Marketplace. I imagine most people at this point probably know the AWS Marketplace. It’s just like the Atlassian marketplace, or even the one on your iPhone or your Android, where you can download all the different apps that are tied in. But what Amazon started doing, maybe a year or two ago, was really doubling down on how many groups were in that marketplace and on the ability for secondary companies to sell offerings that were wholly independent of having to live on AWS, but that you could still get through the marketplace. One example a lot of people may have seen is Splunk. You can get Splunk and have it installed on AWS, almost like a SaaS-type application. But you can also just get Splunk, get your license, and put it wherever you want. In either scenario you can still procure through the marketplace. And what we’ve found more and more people really like about that is, look, I don’t know how many of you have procurement departments, but the nice thing is that once you go through all the work to get a vendor like AWS approved through your procurement manager or procurement team, you don’t have to go through those steps again: the MSAs and the NDAs and all the sign-offs and everything else.
So for many organizations, you’re then able to get the subsequent applications, whether it’s something substantial like Splunk or maybe some small little plugin that doesn’t cost a whole lot, without going through weeks and months of rigmarole just to get a hundred-dollar application. When you’re buying it through Amazon, through that marketplace, you can do it under the same agreements. That gives you so much more peace of mind and removes a lot of the barriers to entry, so you can pick these things up a lot more quickly. So it’s been very popular with a lot of organizations. The extension of that is that now, with this Private Marketplace, whoever controls your account or organization within your particular company can start to curate what’s in that marketplace, and a really large organization can say, hey, we support these tools but not those.
You really get a tightly curated collection of what’s available in that marketplace. You can just say, hey, teams, go buy what you need, tag it appropriately, put it in your respective Amazon accounts, and the budget will all roll up the right way. You still won’t have to go to procurement, because it’s all going through the preexisting Amazon relationship. But now you’re almost setting up a store, a menu if you will, of all the things your team can use. That starts to get really appealing when you’re talking about many thousands of individuals, where an email just telling them what to do really isn’t going to work as well as having the store they normally go to contain only the stuff they’re allowed to use.
Kalpan –
Thanks, Brandon, for bringing this up. The marketplace has been a growing space for our customers to leverage and procure the software of their choice. What we’ve seen over the last few years is the growth in the number of ISV and third-party software products available, ready to deploy and run on AWS. As of last week, when we went to re:Invent, I believe we now have about 4,500 unique software products available to procure on AWS Marketplace. Given that capability and that growth and adoption, what we’re hearing from our customers is: hey, this is really useful. In a few clicks of a button I can procure the various types of software that I use and my teams need, and I don’t have to, as Brandon pointed out, set up various systems and processes, including security and trials and all of the above.
So how do we create, for our own needs, a catalog of the products and services we can consume? That gives rise to what we now call Private Marketplace. It’s a new feature that allows administrators to create a catalog of approved vendors, products, and services from AWS Marketplace. Users in the same account as the administrator can purchase and launch these pre-approved applications for their projects. Using this private marketplace ensures that AWS users are using software that meets the company’s procurement, legal, and security requirements, which means it is compliant. Having those controls also enables administrators to define permissions for certain sets of users over a set or group of applications, and they can receive notifications every time a product is added to or removed from the private marketplace.
And there’s a level of customization with the company’s branding, logos, and other messaging. Private Marketplace is built on top of AWS Marketplace, so you get access to all these software products that we have available and growing as we speak, which makes it really simple to find, buy, and deploy the software solutions that run on AWS. So that’s the beauty of it. I’m very excited to announce this, as I have often felt in meetings with customers that they’re looking for exactly such a solution, and they’re able to do this now with a level of comfort, operating in their own custom way with their own catalog and their own definitions with respect to security and procurement standards. This matters when customers look at spending on new applications, for sure, but also for migration, where one of the pillars is repurchase.
So I think in those terms: if I’m using Splunk or Informatica or several other software products that are currently running on-prem, the good news is that these solutions are available as SaaS on AWS. Now, how do I make the switch, or make that migration? Having these solutions launched via the private marketplace is a very important and useful tool, and this is the fastest way to adopt new SaaS solutions using AWS Marketplace. So I’m pretty excited for the customers who have been waiting for these capabilities, and it’s really easy and simple: a few clicks of a button and you’re running these software solutions through our marketplace. Thanks for bringing that up, Brandon.
If you don’t mind, I was going to take us back to a question around Control Tower. One of the questions that came in from one of our attendees is: how does Control Tower compare to AWS Organizations?
So one of the things that Control Tower provides is a well-architected multi-account environment. The way to see these new capabilities is that you can now select and apply pre-packaged policies, which could be applicable, let’s say, to a specific set of accounts, and then provision a new account quickly with the help of rules or specific categories of accounts, with the same level of established compliance and policies. The other thing to think about with Control Tower is that it’s an environment with a lot of automation baked in, such as the rules for security, operations, and compliance; all of those can fit in here. And you can still manage a large number of accounts, which could map to distributed teams, or could be viewed through the lens of a specific set of applications, for example a set of applications that has to meet GRC and other compliance standards. That’s where I feel Control Tower is very instrumental in bringing that automation into the hands of our development teams. Over to you, Brandon.
I think that captures it. It’s not meant to be a replacement for the Organizations construct where that exists, but it’s really for when you’re laying these things out for the first time, so that you’re doing it from templates, following and implementing whatever your configuration preferences may be. That makes it easier not just to aggregate the data, which Organizations sort of helps with, but to really dictate how you want it done; back to that point that multiple, sometimes crazy multilevel, parallel accounts are becoming more and more of a standard practice.
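Control Tower itself is driven through the console, but the AWS Organizations layer it builds on is scriptable. As a rough sketch of the account vending and guardrails being discussed, here is what the raw Organizations calls look like in boto3; the OU name, email, and policy are all hypothetical examples:

```python
import json
import boto3

org = boto3.client("organizations")

# Create an organizational unit to group, say, all sandbox accounts.
root_id = org.list_roots()["Roots"][0]["Id"]
ou = org.create_organizational_unit(ParentId=root_id, Name="Sandbox")

# Vend a new member account (asynchronous; poll the returned status).
org.create_account(
    Email="sandbox-team1@example.com",   # hypothetical
    AccountName="sandbox-team1",
)

# A service control policy acting as a simple guardrail:
# member accounts may not leave the organization.
scp = org.create_policy(
    Name="DenyLeaveOrg",
    Description="Member accounts may not leave the organization",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }],
    }),
)
org.attach_policy(
    PolicyId=scp["Policy"]["PolicySummary"]["Id"],
    TargetId=ou["OrganizationalUnit"]["Id"],
)
```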
Dan –
Just got another question, stepping back even further to the Aurora Global Database. The question specifically was whether or not it’s synchronous.
So Global Database, I don’t believe, allows you to do writes against any of these subsequent regions. There’s still the premise of a master or main installation, and the replication goes out to everything else. So your applications at those endpoints can still do all of their read activity off of those replicated instances, but they’re not able to do writes; that still has to happen back at the master or main instance.
But I will say that it’s kind of crazy, because, just so people are clear, it does have allowances where, even without a total outage, if you just saw performance degradation at your main installation, you could switch any of those other secondary regions to be your new master and perform the writes there. And it’s crazy: the RPO time is five seconds and the RTO is one minute. They’re kind of absurd metrics compared to what it takes to implement these things on your own infrastructure, but it allows for very, very high levels of availability.
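For reference, promoting a secondary along those lines meant detaching its cluster from the global cluster so it becomes a standalone, writable primary. A hedged boto3 sketch of that promotion, with hypothetical identifiers, might look like this:

```python
import boto3

# Run this against the region you are promoting.
rds = boto3.client("rds", region_name="eu-west-1")

# Detach the secondary cluster from the global cluster; once detached,
# it becomes a standalone cluster that accepts writes.
rds.remove_from_global_cluster(
    GlobalClusterIdentifier="demo-global",   # hypothetical
    DbClusterIdentifier=(
        "arn:aws:rds:eu-west-1:123456789012:cluster:demo-secondary"
    ),
)

# Then repoint application write traffic at the promoted
# cluster's writer endpoint.
```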
Dan –
I guess before we move on, any other announcements or updates that interest you guys? Lake Formation, Ground Station, anything maybe not even pertinent but very unique, or fun to think about or talk about?
Brandon –
I really liked the Outposts announcement that they made. For anyone who didn’t see it, it’s really the idea that most organizations still have some kind of hybrid cloud presence. Again, you might have some stuff in AWS, but you still have a bunch of things going on in your own data centers, wherever they may be. With Outposts, you can start to actually implement AWS services inside of your own data centers. So you could effectively replicate RDS functionality within your own data center, which is a pretty amazing thing. I think it was only last year that AWS announced the partnership with VMware, which really lets you start replicating a lot of VMware activity in AWS environments and bringing those two worlds together. But now we’re pushing AWS functionality into private data centers, and that’s kind of a game changer.
You can definitely tell where AWS is going strategically in being able to enable those kinds of functions, and of course it won’t work with all 1,200 different things that AWS can do, but I think a lot of the core functionality is going to be effectively instantiated within these environments, so that you’re only training your team on one thing. You’re getting familiar with one consistent way of provisioning databases, compute instances, all those kinds of things. And that’s a huge strategic shift from where things were a year ago, in the level to which you can bring these two worlds together.
Kalpan –
I’m super excited about Outposts. As a customer, if you look at Outposts, it brings the native AWS services, the infrastructure, and the operating models to virtually any data center or co-location space; that’s one key element of how I look at Outposts, to Brandon’s point. And you can use the same APIs, the same tools, the same hardware, and the same functionality across on-prem and the cloud to deliver a truly consistent hybrid experience. That’s Outposts. Outposts can also be used to support workloads that need to remain on-prem due to low latency or local data processing needs, and it’s an ideal solution when we talk about compliance and regulatory requirements, bringing that consistent on-prem and cloud experience. This is truly the consistent experience we want to build, so I’m super excited about AWS Outposts. And again, this is what we heard from our customers: how can we bring the experiences AWS has in its services over to virtually any other data center or colocated space? Here’s the solution, and I’m delighted that we heard the customers and that this is a service meant to meet their needs.
Dan –
One other big angle we have is that a lot of our customers are running Atlassian tools, Jira, Bitbucket, Confluence, even Splunk, currently on-prem or in some colocation facility. Are there any patterns that either of you have picked up on, or things discussed at re:Invent, that would influence those customers running those products who are considering moving or migrating to AWS?
Kalpan –
So let me answer this question and then, Brandon, you can add on. One of the key things we’ve observed is that customers who are running a large number of ISV solutions or third-party software, and who are looking at leveraging the cloud, either by moving those workloads into the cloud, or adopting SaaS solutions, or something in between, want all the agility, DevOps, and other key benefits the cloud delivers. So think about it: you’re running Jira in an on-prem environment, in this particular case. What are the options to leverage the cloud benefits in AWS? Customers definitely have a few options. They can do a “lift and shift,” or rehosting, migration, where they not only meet their ongoing infrastructure needs to run those Jira workloads, but also get scalability for the occasional need for more infrastructure for a specific period of time, to address seasonality or new product releases and those kinds of situations. They can definitely leverage AWS by doing a rehost, but they also have the ability to adopt a SaaS solution that is built on AWS. So you have a few possible options.

Where we see a need for us to support and build these foundations is in creating a playbook, if you will, of the things you need to follow to migrate the workloads into AWS, whether it’s a rehost or, in some cases, a repurchase, like I said earlier. What we have done is curate a set of playbooks, runbooks, and workbooks, which lay out steps along the way that customers, and the partners who serve our customers, can follow, so that the process becomes repeatable. It saves a lot of time because of the assets, frameworks, and best practices baked into the methodologies in these AWS-supported playbooks, including best practices learned from the ISVs themselves. In the case of Splunk, there are multiple options, and customers want guidance: how do I make that decision, should I be doing a rehost here, should I be considering a SaaS solution, and how do I review my options in a given scenario? So there’s a set of best practices, analysis, and frameworks that we provide, and partners such as Cprime are key to leveraging those best practices as they help our customers. The other way to look at it is: how do I get to a point where I can start retiring my technical debt? Because that’s one way to leverage the cloud, and there are upsides in the management layer and even in ongoing technology refreshes and upgrades. It’s easier when you think about it through rehost, replatform, rearchitect, and repurchase of software solutions. And what we decided is that AWS and partners such as Splunk, Informatica, and a few others would create this knowledge base, what we call a library of knowledge, capturing those patterns that partners and customers can leverage. I think we’ll be able to serve many, many customers with these assets and this knowledge base, helping customers with migration projects that are not only faster but also lower risk.
And that’s the experience we want to bring here with partners such as Splunk and the many other ISVs whose software solutions customers need help with. I think what we call the migration patterns library is a key asset that we can leverage to implement and scale with a very large number of customers, with adoption through our partners. So I’m really excited to talk about the migration pattern library, and for almost every single customer who is thinking about leveraging the cloud for third-party ISV software solutions they’re currently running on-prem, this is the go-to place to start working through the steps they should take. So thanks for bringing that up, Dan. This is a foundational effort: we have several patterns in the library already, a lot of our partners contribute to it as well, and we are growing that capability and capacity quite quickly.
Brandon –
One thing that caught my attention there was technical debt. Everybody has a bunch of technical debt they want to get rid of, and everybody has a bunch of operational overhead that’s kind of a pain to deal with regularly. I think Atlassian and Splunk are two great examples of where we’re seeing so many people move with third-party software. A lot of people use these tools; you don’t necessarily want to change them, you just want to minimize the effort of keeping them doing what they do, so you can focus on using them, not maintaining them. And so they each have two very similar offerings: you can either go to their own cloud services, like Atlassian Cloud or Splunk Cloud, both of which are actually powered by AWS behind the scenes, or you can host these things yourself on AWS so that you can throw the database on RDS.
It’s easier to deal with scaling up the database based on need, you don’t have to worry about running out of disk space, all those kinds of things. So we see so many organizations moving in that direction because it’s just easier. Again, you don’t want to worry about keeping the lights on; you want to go play with it, do cool things, and build neat reports and workflows and all that kind of stuff. So we see a lot of shift there. Since we talked about Splunk: one of the other announcements AWS had was Timestream, and Timestream is kind of interesting. It is very much a precursor to a competitor to Splunk. It’s all about being able to aggregate a lot of log and other machine-type information and data, put it in one place, and start doing the insights and the reporting and everything else, to function and serve as a security tool, help with introspection in your DevOps pipeline, all those kinds of things.
Being a new service, it may even still be in beta, so there probably hasn’t been much opportunity to play around with it and see exactly how featured it is just yet. But knowing Amazon, it will probably be gaining features at some absurdly quick rate over the coming months and years, so it’s definitely one to look out for. The pricing model will be interesting to compare as well, because tools like Splunk aren’t universally considered the cheapest tools on the planet, though they’re very high-quality tools. So that will present a decision point for a lot of organizations; if you’re very AWS-centric, being able to use their aggregation tool can obviously make a lot of sense.
Kalpan –
I think it’s a highly scalable, fast, fully managed time series database service. As a customer, I see that there is a need to understand time series data and to sequence data points over time. For data that changes over time, I need insights in real time so I can be more proactive in analyzing that information and coming up with, for example, a measurement over time, like temperature measurements over time. These are simplistic use cases, but I do want to infer from these insights and use them to make recommendations to my end users or customers. Building on Amazon Timestream is the way to go there. It’s a purpose-built time series database service, I would say. You can collect, store, and analyze time series data; it’s fully managed and operates really fast; and on pricing, I would claim it will be much more cost effective than using relational databases. So I’m pretty excited, and thanks for bringing up Timestream, Brandon.
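Timestream was only just announced at the event, so treat the following as a speculative boto3 sketch of how the temperature example might look against the write API, with every name here hypothetical and the API shape subject to change:

```python
import time
import boto3

# Timestream write client; the service was announced in preview at
# re:Invent 2018, so this API shape is provisional.
tsw = boto3.client("timestream-write", region_name="us-east-1")

tsw.create_database(DatabaseName="sensors")
tsw.create_table(DatabaseName="sensors", TableName="temperature")

# One temperature reading per device, keyed by time.
tsw.write_records(
    DatabaseName="sensors",
    TableName="temperature",
    Records=[{
        "Dimensions": [{"Name": "device_id", "Value": "thermostat-42"}],
        "MeasureName": "temperature_c",
        "MeasureValue": "21.5",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),   # epoch milliseconds
        "TimeUnit": "MILLISECONDS",
    }],
)
```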
Brandon –
No, absolutely. I think we also had a question about serverless in aggregate and some of the other things with Lambda Layers and all of that. I will be the first to admit that when Lambda was first announced a couple of years back, I was a little skeptical, not necessarily about the technology itself or the ability of Amazon to execute, but just the idea: could you truly move to a world where you separate out the servers themselves, forget about servers and EC2 and all that kind of stuff, and literally just have the code sit there and execute? Obviously it’s underpinned by servers, but that part becomes completely abstracted away, and you become very, very on-demand with everything you’re trying to do. There were some announcements in the last week around DynamoDB on-demand, the simple idea that if you’ve got NoSQL work going on, maybe underpinning some of what Lambda is doing, you don’t need to keep that database capacity provisioned; let it come up, do its thing, and drop off. Similarly, Layers really give you the ability to load in and reference libraries and get to a higher level of complexity within the code that executes. What I’ll say, coming back to it after a couple of years now, is that the use cases have definitely expanded quite a bit. It’s not necessarily replacing all of your Java apps tomorrow or anything, but we’re seeing more and more point-in-time use of this kind of solution for the various functions that fit that need very well: something ephemeral that’s quick, does its thing, and is gone.
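To ground the DynamoDB on-demand point: on-demand is a billing mode, so you create the table without provisioning any read or write capacity and pay per request. A minimal boto3 sketch, with a hypothetical table name:

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# PAY_PER_REQUEST is the on-demand capacity mode announced at
# re:Invent 2018: no provisioned throughput to size or scale.
ddb.create_table(
    TableName="lambda-scratch",   # hypothetical
    AttributeDefinitions=[
        {"AttributeName": "pk", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```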
Kalpan –
I would call out something very interesting you brought up around Lambda Layers. I don’t know if you noticed, but as I was watching these announcements, specifically around Lambda Layers and the new Runtime API, when I heard Werner describe for the first time a simple interface for developers to use any programming language, or a specific language version, I saw smileys and lots of tweets, and great relief on the faces of our software developers. So I’m super excited about that.
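For the curious, publishing and attaching a layer is a small amount of API surface. Here is a hedged boto3 sketch, where the zip file and function name are hypothetical:

```python
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# Publish a shared dependency bundle as a layer version.
with open("shared-libs.zip", "rb") as f:   # hypothetical bundle
    layer = lam.publish_layer_version(
        LayerName="shared-libs",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

# Attach the layer to an existing function; its contents are then
# available to the function code under /opt at runtime.
lam.update_function_configuration(
    FunctionName="demo-fn",                # hypothetical function
    Layers=[layer["LayerVersionArn"]],
)
```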
Brandon –
It really has evolved how people attack these kinds of things, and not just for startups; at the rate we’re going, I think you’ll start to see enterprises taking this kind of approach soon. And I know we’re getting a little close to the end. These were some very good questions.
Dan –
So, Kalpan, Brandon, and everyone who attended here today, I appreciate your time. I hope that you all gained a little from this webinar, even if you didn’t get a chance to attend re:Invent itself. Any other final parting thoughts, Brandon or Kalpan?
Brandon –
If you haven’t been, it’s worth going. They’re going to do it again next year, December 2-6, in Las Vegas. There will be a ton of sessions, hundreds of different vendors, and the sort of crazy, bizarre laser-light-show dance party stuff that they do there. They’re kind of famous for their big party at the end.
Kalpan –

One thing for sure, Dan, regarding your thoughts: if our customers or partners missed going to re:Invent in 2018, we will soon have all the videos of the track sessions and all of the material uploaded to our YouTube channel. I look forward to you joining or subscribing to that; you’ll be able to consume a lot of technical deep dives, new service announcements, and details on how to apply them. So feel free to join and consume that information. And yes, I look forward to seeing you at the next re:Invent. There are also multiple AWS Summits around the world throughout the year; feel free to participate if one is near you and convenient. I look forward to seeing you at those AWS events. Thank you for having me here. Again, Dan and Brandon, thank you so much for the partnership and for bringing this to our joint customers.
Myriam –
Well, we’ve come to the end of our webinar. I want to thank you so much, Kalpan, Brandon, and Dan, for a great review of AWS re:Invent 2018, some of the great things that are coming, and how organizations can solve some of their challenges. And of course, I’d like to thank all of our attendees very much for taking the time to attend this webinar today. We hope you found it of interest and that you’ll join us for our next installment, where we’ll be delving into a wide array of topics. Thank you again, everyone, for joining us today, and have a great end of day.