To support the delivery of technology to its many customers and platforms, Thales wanted to sharpen its delivery process, moving from VMs to Containers and from waterfall to DevOps. This requires changes in culture, skills and tools. In particular, Thales wanted to help its partners deliver containerised applications. The solution was to build a Stack-in-a-Box using OpenShift as the container platform. The challenge was that this had to be installed and unpacked into the partner ecosystem without any connection to the internet, so we packaged all the installation media and tools in a set of VMs, along with sample applications to show how developers should work. This approach worked well; however, if we were doing it again we would use OpenShift 4.x and make more use of Operators. See also the video of our session from the Red Hat Forum 2020: https://www.estafet.com/post/red-hat-forum-2020
Background
Operating across 68 countries, Thales Group works to keep the public secure, guard vital infrastructure and protect national security. To support the delivery of technology to its many customers and platforms, Thales wants to sharpen its delivery process, moving from VMs to Containers and from waterfall to DevOps. This requires changes in culture, skills and tools.
The brief was to create a Stack-in-a-Box (SIAB): a DevOps environment that partners could use to deliver containerised solutions. To make it easy to consume, it had to bootstrap from nothing, with no access to a network at any time. The solution therefore had to include all the install media so that it could unpack and install itself on the target system provided by the partner company. We also wanted a low barrier to adoption: developers required sample applications that followed best practice and that they could copy and extend.
The Environment
We decided that three virtual machines would be required to install and create the development environment.

The Environment
The Infrastructure VM contains all the libraries and images needed to create the development VMs – it can be connected to a network to download all required packages and then disconnected before use. The Infrastructure VM is only needed at the time of installing the environment.
The Deployment VM (labelled OCP in the diagram) hosts the OpenShift containers used for the DevOps pipeline, which is part of the sample application provided alongside the VMs. We chose an all-in-one server installation of OpenShift 3.11, installed via an Ansible script as part of the bootstrap process. The Ansible script itself forms part of the documentation, as it describes how to install OpenShift. We considered options such as OpenShift 4, but at the time of development it did not permit the disconnected installation that we required. Further elements of the installation process create several containers holding Jenkins, Sonarqube, and Trivy servers, as well as a Kafka installation – all used by the sample application. Alongside OpenShift we also install Gitea and Nexus on this VM.
The IDE VM provides the development UI for the applications built and hosted on the OCP VM; for this reason we included Eclipse. We pre-configured the Eclipse IDE to work with the sample repository hosted in Gitea on the OCP VM. We also created webhooks within the Gitea repository so that changes made within Eclipse are built automatically. As with the other scripts, the webhooks form part of the documentation of best practice.
The Jenkins, Gitea, and Sonarqube consoles can be accessed from the IDE VM via bookmarks pre-installed in the browser.
An advantage of the separation between the IDE and Deployment VMs is that future scenarios could use a cloud-based deployment environment if such a solution becomes possible. Alternatively, users could connect their own local IDE to the OCP VM, following the examples that we have provided.
The Sample Application
Alongside the VM-based development environment we provide a sample project. It demonstrates an approach to CI/CD using a Jenkins build server hosted in an OpenShift container. The Jenkinsfile at the top level of the project controls the order of the build and is explained below.
The project works on two levels. Beyond the Jenkinsfile, which shows how to create a continuous build pipeline, the project is a Spring Boot Java application that integrates with a Kafka cluster running in OpenShift and publishes messages to a Kafka topic. Kafka is one of the target technologies used by Thales.
In the full Stack-in-a-Box solution the project is hosted in a Gitea server running in the Deployment VM. Deployed in the OpenShift cluster are a Jenkins server, a Sonarqube server, a Kafka installation, and a Trivy container.
The Jenkins pipeline
The order and steps of the build are defined in the Jenkinsfile at the top level of the project. This file represents a declarative Pipeline for Continuous Build and Deployment.
The Jenkins Pipeline syntax is similar to Groovy, with some exceptions. The Jenkins server interprets the steps in this file and manages the build accordingly. The example file can be used as a template for building other pipelines. A Jenkins Pipeline consists of a top-level pipeline{} element which encloses the other elements; within it are other blocks such as stage{}, which can contain steps{} elements.
pipeline {
    agent {
        any {}
    }
    environment {
        // Get the maven tool
        // NOTE: 'M3' maven tool must be configured in global config
        def mvnHome = tool 'M3'
        def VERSION = readMavenPom().getVersion()
    }
    stages {
        stage('First Stage') {
            steps {
                echo 'Beginning pipeline!'
                echo "pom version is ${VERSION}"
                echo "jenkins build is ${BUILD_NUMBER}"
            }
        }
        ...
Sample Jenkinsfile (partial)
The diagram below shows a schematic of the Jenkins pipeline included with the project.

Jenkins schematic
The Jenkins server within the OCP machine has been installed with the Gitea plug-in. Using this plug-in, the pipeline can be ingested and webhooks created so that a build is triggered automatically when code is pushed from the IDE to the Gitea server.
The steps in the provided example pipeline are as follows (an illustrative sketch of some of these stages appears after the list):
- Jenkins pulls the code from Git when the build is triggered, whether manually or via a webhook
- The next stage runs the automated unit tests via the Maven test command
- Next we run a Maven target that invokes the Sonarqube scanner to analyse the code. The Sonar scanner plug-in for Jenkins was pre-installed and invokes the Sonarqube container, which is also deployed in OpenShift
- The next stage runs a Maven command to build the fat jar representing the Java application
- Next we check whether a source-to-image (s2i) Build Configuration (bc) already exists in the OpenShift project. If not, the step creates one using the OpenShift plug-in for Jenkins
- Next we use the Build Configuration just created, together with the fat jar built earlier, to create a new image for deployment, again using the OpenShift plug-in for Jenkins
- The next stage checks whether a Deployment Configuration (dc) already exists in the OpenShift project. If not, it creates one using the OpenShift plug-in
- Next we tag the newly built image with a tag combining the Maven version and the Jenkins build number, using the OpenShift plug-in. This uniquely identifies the built image
- Finally, a stage invokes the Trivy server deployed in its OpenShift container to scan the image we have built for vulnerabilities
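To make the OpenShift-related stages more concrete, below is a minimal, illustrative sketch of how a few of them can be expressed with the Sonar Maven target and the OpenShift Client plug-in DSL. The application name ('demo-app'), image stream and jar path are placeholders rather than the values used in the project; the Jenkinsfile shipped with the sample project remains the reference.
// Illustrative sketch only – names and flags are placeholders,
// not the project's actual Jenkinsfile
stage('Code Analysis') {
    steps {
        // Maven target that hands the code to the Sonarqube scanner;
        // the server URL comes from the pre-configured Jenkins/Sonar settings
        sh 'mvn sonar:sonar'
    }
}
stage('Create Build Config') {
    steps {
        script {
            openshift.withCluster() {
                openshift.withProject() {
                    // Create the s2i binary BuildConfig only if it does not already exist
                    if (!openshift.selector('bc', 'demo-app').exists()) {
                        openshift.newBuild('--name=demo-app', '--image-stream=java', '--binary=true')
                    }
                }
            }
        }
    }
}
stage('Build and Tag Image') {
    steps {
        script {
            openshift.withCluster() {
                openshift.withProject() {
                    // Feed the fat jar into the binary build, then tag the result
                    // with the Maven version and the Jenkins build number
                    openshift.selector('bc', 'demo-app').startBuild('--from-file=target/demo.jar', '--wait')
                    openshift.tag('demo-app:latest', "demo-app:${VERSION}-${BUILD_NUMBER}")
                }
            }
        }
    }
}
Illustrative pipeline stages (sketch, not the project's actual Jenkinsfile)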
After testing, the container image is ready to be sent back to Thales.
The success or failure of the Jenkins build is automatically fed back to the Gitea server by the Jenkins build process and is visible on the Gitea console, which can be accessed via a bookmark pre-installed in the browser of the IDE VM.


Gitea notified of the Jenkins build status
Similarly, the results of the Sonarqube analysis can be accessed from the Jenkins console.

Access the Sonarqube results from Jenkins
The Java Spring Boot Kafka application
The Java application assumes that a Kafka cluster is running in the OpenShift cluster. This was set up as part of the OCP machine build.
Within the project there is a file named application.yml in src/main/resources, which contains the properties used by the main application to configure its connection to Kafka, including the address of the bootstrap server and the name of the topic to be used – greeting-topic.
spring:
  kafka:
    consumer:
      group-id: tpd-loggers
      auto-offset-reset: earliest
    # change this property if you are using your own
    # Kafka cluster or your Docker IP is different
    bootstrap-servers: my-cluster-kafka-bootstrap.kafka.svc:9092
tpd:
  topic-name: greeting-topic
  messages-per-request: 10
application.yml file
The main application is contained in the com.example.demo package under src/main/java.
ExampleApplication.java – this class defines the application using the @SpringBootApplication annotation and defines the ProducerFactory and KafkaTemplate that will be used to send messages to the Kafka cluster
@SpringBootApplication
public class ExampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(ExampleApplication.class, args);
    }

    @Autowired
    private KafkaProperties kafkaProperties;

    @Value("${tpd.topic-name}")
    private String topicName;

    // Producer configuration
    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props =
                new HashMap<>(kafkaProperties.buildProducerProperties());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
Greeting.java – this class is a simple data class used to define the greeting objects that will be sent to Kafka
public class Greeting {

    private final long id;
    private final String content;

    public Greeting(@JsonProperty("id") long id, @JsonProperty("content") String content) {
        this.id = id;
        this.content = content;
    }

    public long getId() {
        return id;
    }

    public String getContent() {
        return content;
    }

    @Override
    public String toString() {
        return "Greeting::toString() {" +
                "content='" + content + '\'' +
                ", id=" + id +
                '}';
    }
}
GreetingKafkaController – this class acts as a controller for the RESTful service using the @RestController annotation. It takes HTTP requests and turns them into greetings to be sent to Kafka. The class contains two RESTful mappings – a GET and a POST mapping. In this case the GET mapping is a dummy, as the Kafka stream operates in real time and its listeners receive messages as they are POSTed. The POST mapping takes an HTTP parameter and generates the greeting before sending it to Kafka using the Template defined in the main Application class.
@RestController
public class GreetingKafkaController {

    private static final Logger logger =
            LoggerFactory.getLogger(GreetingKafkaController.class);

    private final KafkaTemplate<String, Object> template;
    private final String topicName;
    private final AtomicLong counter = new AtomicLong();
    private static String messageTemplate = "Hello, %s!";

    public GreetingKafkaController(
            final KafkaTemplate<String, Object> template,
            @Value("${tpd.topic-name}") final String topicName,
            @Value("${tpd.messages-per-request}") final int messagesPerRequest) {
        this.template = template;
        this.topicName = topicName;
    }

    @GetMapping("/greetings")
    public String getGreetings() {
        logger.info("Messages received");
        return "Hello from Kafka!";
    }

    @PostMapping("/greetings")
    public String postGreetings(@RequestParam(value = "name", defaultValue = "World") String name) {
        Greeting greeting = new Greeting(counter.incrementAndGet(), String.format(messageTemplate, name));
        this.template.send(topicName, greeting);
        logger.info("Message sent: " + greeting);
        return "Message sent: " + greeting;
    }
}
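The application.yml shown earlier also configures a consumer group (tpd-loggers), and the listeners mentioned above receive the greetings as they are posted. A listener for greeting-topic can be written with the @KafkaListener annotation along the lines of the sketch below; this is an illustration rather than the project's own listener class, and the class name is hypothetical. It receives the JSON payload produced by the controller as a raw string – deserialising it back into a Greeting would additionally require a JSON-aware consumer factory.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class GreetingKafkaListener {

    private static final Logger logger =
            LoggerFactory.getLogger(GreetingKafkaListener.class);

    // Listens on the topic the controller publishes to, using the
    // tpd-loggers consumer group defined in application.yml
    @KafkaListener(topics = "${tpd.topic-name}", groupId = "tpd-loggers")
    public void listen(String message) {
        logger.info("Greeting received: " + message);
    }
}
Illustrative Kafka listener (sketch, not part of the sample project)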
Conclusion
The project worked well and fulfilled the brief: we were able to bootstrap fully functioning CI/CD environments on client infrastructure using OpenShift 3.11.
As expected, we found that our requirements were atypical of most OpenShift deployments. Almost everything is always connected in the modern IT environment, but we had to make everything work disconnected. The important thing was to ensure that all required packages and images were available on the Infrastructure VM, and there was a certain amount of trial and error to this as dependencies were not always clear at the outset. This particularly applied to the plug-ins required by Jenkins to interact with the various other elements of the solution: all plug-ins (with the correct versions) had to be downloaded before the Jenkins pod was created and copied into the plugins folder on the shared drive so that Jenkins would find them when restarted. Similarly, the image vulnerability scanner (we chose Trivy) had to be set up, along with its database, in advance.
Generally, it is really hard to resolve problems offline, especially when debugging them requires yet more dependencies. Technically, everything is “possible” to get working offline; practically, if it is not documented then it is not supported.
We expect this would have been an easier (though still not simple) project on OpenShift 4.3, which has better documentation for an offline install. We also think we could have made better use of Ansible to automate the creation of the environment, including Operators to build and monitor the various OpenShift pods. A second iteration of this project would definitely benefit from these technologies.