DevOps & Microservices Showcase, Dublin, 24 May 2017
DevOps is now in its seventh year of practice, but the old architecture is no longer able to support the required speed of delivery, and so needs the additional help of microservices and Docker for incremental change. Adopting a new capability, however, requires a plan that covers people, process and technology.
Microservices architecture helps to deliver easy testing, fast deployments and overall agility. It is also fairly complex, so to implement microservices successfully you need to understand the core concepts behind this approach.
This event is for you:
Joint morning session with Agile Showcase & Testing Showcase
Agile, Testing and DevOps: Are they a Separate conversation or a progression of capability?
DevOps, Testing and Agile have shared environments that facilitate working together, and these three methods involve more than simply adopting new tools and processes. The synergy comes from building a stable Continuous Integration (CI) infrastructure and an automated pipeline that moves deliverables from development to production. The entire build process should be transparent, and it should enable and support both development and operations. This transformation depends on significant changes in culture, roles and responsibilities, team structure, and tools and processes.
The Round Table session is the last part of the joint morning session with the other two co-located events. It lasts 45 minutes, of which around ten minutes are reserved for a general summing-up at the end. The speaker at each table has a set theme, and delegates join whichever table interests them. Delegates receive the full list of topics with their joining instructions, and again at registration, so they can choose the topics they want to attend. This is a discussion group, so no presentation slides are necessary; please submit a topic if you would like to chair a discussion related to Testing, Agile and DevOps.
Benefits of attending:
The speakers at this conference will follow the process from development to production.
Mr. Patrick O'Beirne, Managing Director, Systems Modelling Ltd
Ken Thompson, Managing Director, Dashboard Simulations Ltd
1. To review where game-based learning (GBL) can provide the best returns in terms of developing your technology leadership and your team leadership skills.
2. To identify the key foundations and pitfalls to avoid for the successful adoption of GBL.
3. To illustrate GBL with real project examples and live game participation with the audience.
* The pros and cons of GBL
* The key foundations for successful GBL
* The different types of game
* Top leadership skill areas for GBL, illustrated with real projects and live audience interaction:
— Project/Team Management
— Commercial Acumen and Business Awareness
— Collaboration & Competition
— Conversational Skills
* Q&A and Discussion
Martin Gutenbrunner, Technology Strategist, Dynatrace
Devs have IDEs with code completion and syntax highlighting, version control, unit tests, CI/CD, pre-production environments and, of late, even microservice platforms like Red Hat's OpenShift, Pivotal's Cloud Foundry or Microsoft's Service Fabric (just to name a few). Ops have logfiles, …, charts, …, uhm, and, …, did I mention logfiles? This talk focuses on what DevOps does, or should do, for Operations (which includes the operating coder). For one, this means tools we are still missing (especially in the open-source space), plus things developers should do to support the non-coding fraction as well as possible. After all, it is their task to operate apps and services they neither planned, architected nor developed.
Dave Snowden, CTO, Cognitive Edge
Too many methods and techniques in software development are simple recipes derived from limited and partially understood cases. They treat the organisation as if it were a complicated machine rather than a highly complex, ever-shifting and frequently fragile ecology. This presentation will introduce the award-winning Cynefin framework, along with ideas from complexity science, biology and the cognitive sciences that allow us to manage conditions of inherent uncertainty.
Chris Dare, Senior Security Engineer, Abide Financial
The continuous delivery pipeline is the backbone of DevOps, and can be used to deliver software at a pace that will leave traditional security testing behind. Alternatively, security or DevOps teams can use continuous delivery to their advantage and make security an integral part of software delivery without slowing the pace.
Eamonn Powers, Senior Sysadmin/Researcher/Software Engineer, TSSG
With the average lifespan of an ICT service at 4 (?) years and staff tenure between 3 and 5 years, the possibility of finding you’ve got a legacy IT service and nobody on staff to support it is ever increasing. Most organisations address this with risk analysis, succession planning and service migration.
However, there are still services that escape this list through lack of knowledge, poor documentation or rapid changes in team. What happens when there’s sole ownership or very low bus factor?*
How do we cope with this environment? Through a standard language, cross-discipline teams with customer involvement, easily verifiable system-integrity checks, and knowledge capture.
* Bus Factor: The bus factor is a measurement of the risk resulting from information and capabilities not being shared among team members, from the phrase "in case they get hit by a bus". It is also known as the lottery factor, truck factor, bus/truck number or lorry factor. https://en.wikipedia.org/wiki/Bus_factor
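Following the definition above, a crude bus-factor estimate can be sketched from a mapping of components to the people who understand them. The helper name and sample data below are invented for illustration, not taken from any real tool:

```python
from collections import Counter

def bus_factor(component_owners: dict[str, set[str]]) -> int:
    """Greedy estimate: how many people must disappear before some
    component is left with nobody who understands it."""
    remaining = {c: set(owners) for c, owners in component_owners.items()}
    removed = 0
    while all(remaining.values()):  # every component still has an owner
        # Remove the person who covers the most components.
        counts = Counter(p for owners in remaining.values() for p in owners)
        victim = counts.most_common(1)[0][0]
        for owners in remaining.values():
            owners.discard(victim)
        removed += 1
    return removed

team = {"billing": {"alice"}, "auth": {"alice", "bob"}, "ui": {"bob", "carol"}}
print(bus_factor(team))  # billing depends on alice alone, so the factor is 1
```

A result of 1 is the "sole ownership" case the abstract warns about: losing a single person stalls part of the service.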
Lianping Chen, Senior Software Engineer, Paddy Power Betfair
Continuous Delivery (CD) is a relatively new software development approach. Companies that have adopted CD have reported significant benefits. Motivated by these benefits, many companies would like to adopt CD. However, adopting CD can be very challenging for a number of reasons, such as obtaining buy-in from a wide range of stakeholders whose goals may seemingly be different from—or even conflict with—our own; gaining sustained support in a dynamic complex enterprise environment; maintaining an application development team's momentum when their application's migration to CD requires an additional strenuous effort over a long period of time; and so on. In this talk, I will present several strategies to overcome the adoption challenges.
Peter Elger, CTO, nearForm
The microservice architecture is a powerful way to structure large scale Node.js systems, providing a component model that scales both at the human and system levels. To fully gain the benefits of the microservice architecture, you need to move to a continuous deployment work process. It should be possible for developers to write the code for a new feature in the morning, and deploy it to production in the afternoon. This high-speed development cycle allows you to meet business goals much more effectively.
Continuous deployment brings risks. You can no longer manage those risks in the traditional manner: there is no multi-month release cycle to fall back on, and the familiar work categories of development, quality assurance, information security, and operations no longer apply. Instead you must learn to live with the DevOps approach. But giving developers the keys to the server does not automatically mean less downtime, even if you hand out pagers with the keys and adopt a "you build it, you run it" philosophy. Even when this approach does work, it is notorious for burning people out. Is there a sustainable set of work practices that reduces the risk of continuous deployment to acceptable levels?
The solution is to take a scientific, engineering-based approach, grounded in reality and driven by actual business goals. First, quantify the business objectives, both in terms of the value you intend to create and in terms of the cost-benefit of failures. There are always going to be failures, so what is your error budget: how much downtime is acceptable? Second, control the production system by making all changes incremental: you are only ever allowed to add or remove a single microservice instance, nothing else. Microservice instances are built as immutable artefacts, so changes to running production code are not possible; to change behaviour, you must deploy a new instance. This gives you a definitive history of the system. It also gives you a way to measure risk!
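The error-budget arithmetic is simple enough to sketch in a few lines; the 99.9% monthly availability target below is an assumed example, not a figure from the talk:

```python
def error_budget_minutes(availability_target: float, period_days: int = 30) -> float:
    """Minutes of allowed downtime per period for a given availability target."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - availability_target)

# A 99.9% target over a 30-day month leaves roughly 43 minutes to "spend" on failures.
budget = error_budget_minutes(0.999, period_days=30)
print(f"{budget:.1f} minutes/month")
```

Once the budget is explicit, each incremental deployment can be judged against how much of it the change consumes.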
Each microservice instance change can be measured. By connecting your quantified business goals to the technical metrics of the live system, you can verify each change: does it improve the metrics or harm them? If it harms them, roll back the change. The rollback is not a major event; it is just the deactivation of a single microservice instance.
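A minimal sketch of that verify-or-roll-back step, assuming an invented `Instance` record and an error-rate metric (the names and tolerance are illustrative, not from the talk):

```python
from dataclasses import dataclass

@dataclass
class Instance:
    version: str
    active: bool = True

def verify_change(baseline_error_rate: float,
                  new_error_rate: float,
                  tolerance: float = 0.01) -> bool:
    """Keep the change only if it does not harm the metric beyond tolerance."""
    return new_error_rate <= baseline_error_rate + tolerance

def apply_or_rollback(new: Instance, baseline_rate: float, new_rate: float) -> Instance:
    if not verify_change(baseline_rate, new_rate):
        new.active = False  # rollback = deactivate the single new instance
    return new

inst = apply_or_rollback(Instance("v2"), baseline_rate=0.02, new_rate=0.08)
print(inst.active)  # regression exceeds tolerance, so the instance is deactivated
```

Because the unit of change is one instance, the rollback is exactly one state flip rather than a coordinated release procedure.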
By using immutable microservice instances as the unit of deployment, you enable many risk-reduction strategies without needing to build custom implementations: partial deployments, canaries, bifurcated verification, blue-green deployments, and many others. All of these deployment models are defined as operations on microservice instance activation states. By connecting the state of the system with your business goals, and by operating at speed within a defined error budget, you can fully control the risks of microservice deployment without the overhead of traditional release planning.
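Both the canary and blue-green models mentioned above can be expressed as nothing more than activate/deactivate operations on instance records; the data model here is invented for the sketch, not nearForm's actual tooling:

```python
def activate(fleet: list[dict], version: str, count: int) -> None:
    """Add `count` new immutable instances of `version` to the live fleet."""
    fleet.extend({"version": version, "active": True} for _ in range(count))

def deactivate(fleet: list[dict], version: str) -> None:
    """Flip all instances of `version` to inactive (the rollback/retire step)."""
    for inst in fleet:
        if inst["version"] == version:
            inst["active"] = False

# Canary: activate a single v2 instance alongside the existing v1 fleet.
fleet = [{"version": "v1", "active": True} for _ in range(4)]
activate(fleet, "v2", 1)

# Blue-green: once v2 verifies, bring up a full v2 fleet and deactivate v1.
activate(fleet, "v2", 3)
deactivate(fleet, "v1")

live = [i["version"] for i in fleet if i["active"]]
print(live)  # ['v2', 'v2', 'v2', 'v2']
```

Nothing in either model mutates a running instance; every transition is an addition or a deactivation, which is what makes the system's history definitive.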