Our primary assumption when developing software is that what the client cares about most is quality, because quality directly translates into lower operating costs and a better perception of the client’s company by its customers. All the processes we are responsible for have been designed to prioritise quality.

We specialise in mass capacity systems, which can either process enormous amounts of data quickly or process smaller amounts of data within an extremely short and often guaranteed time (e.g. transaction systems). Some of these solutions naturally belong to the broadly understood area of Big Data / Fast Data.

Once the primary criteria are defined as “quality” and “mass capacity”, it quickly becomes clear that the limiting factor is always development time. The challenge before us, then, is to provide the client with a high-quality system capable of processing massive amounts of data, developed and maintained within a period that is reasonable from the business perspective.

When developing our solutions, we also have to consider their lifecycle. Our systems are designed to handle the most demanding tasks, and their lifecycle may be as long as 20 years. That is why the important factors include not only the cost of development but, most importantly, the ease of extension, maintenance and support. The system has to be designed for long-term, trouble-free use and an equally long, continuous period of further improvement.

The high quality of the system and the need to ensure low costs of continuous improvement and maintenance combined with the short development time are the reasons we have decided to use most of the software engineering solutions described below.


We develop, implement and maintain software using a hybrid methodology based on the best practices of the leading global approaches.

During software development, particularly at the quoting stage, it is impossible to avoid the conventional “waterfall” approach (analysis, design, implementation, tests, installation, maintenance). This approach is used in order to determine the estimated costs of the project and its time schedule. The values determined in this way are used as the basis for future arrangements, primarily with respect to milestones. At the quoting stage, since the detailed analysis and design of the system are not yet available (they are not prepared until the contract is signed as a result of the quote), it is necessary to refer to general assumptions and arbitrarily resolve any uncertainties and ambiguities in the documentation available at the time.

Still, already at this stage, we assume that the actual implementation of the process with respect to milestones will take place using Agile methodologies. This way, our client will be able to depart from the stringency of waterfall plans. The system can be developed, implemented and maintained in a fully flexible way to accommodate the rapidly evolving business environment, modifications of direct functional requirements and even changes in the budget and schedule. We understand Agile as the agility of our communication with the client, designed to specifically customise the product/service to the client’s expectations and to the conditions at a particular time.

Agile methodologies are inextricably linked with our corporate DNA: application support and development are a key issue in our case because there are now certain applications that we have supported for over 20 years. This would not be possible without the agile approach.


Micro-services have been the cornerstone of our approach to software development for over 20 years. We used this methodology before it even had a name. This meant that we had to develop our own solution in this area. Today, after over 20 years of development, we are proud to have chosen this approach, which has become one of the leading trends in the development of today’s computer systems.

From the very beginning, we have consistently developed the “Atom” transaction and payment platform, which provided unparalleled capabilities for running tens or hundreds of thousands of micro-services.

The Atom platform:

  • radically accelerates development by enabling parallel work by the programmers;
  • makes micro-service testing much easier;
  • enables easy and flexible building of complex and comprehensive services (“chemical compounds”) from simple and clearly defined micro-services (“atoms”);
  • has integrated mechanisms for clustering, monitoring, SLA, COB, security, etc.;
  • can function fully automatically in unattended mode.
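The “atoms into chemical compounds” idea can be sketched in a few lines of Python. All names here are invented for illustration; the real Atom platform is a proprietary transaction and payment system, not a Python library.

```python
# Hypothetical sketch: small, clearly defined micro-services ("atoms")
# composed into a larger, comprehensive service ("chemical compound").

from typing import Callable, Dict

Atom = Callable[[dict], dict]  # a micro-service: message in, message out


def validate(msg: dict) -> dict:
    """Atom: reject payment messages without an amount."""
    if "amount" not in msg:
        raise ValueError("missing amount")
    return msg


def convert(msg: dict) -> dict:
    """Atom: convert the amount to cents (illustrative convention)."""
    msg["amount_cents"] = int(round(msg["amount"] * 100))
    return msg


def compose(*atoms: Atom) -> Atom:
    """Build a compound service by chaining atoms in order."""
    def compound(msg: dict) -> dict:
        for atom in atoms:
            msg = atom(msg)
        return msg
    return compound


# A compound payment service built from two atoms.
payment_service = compose(validate, convert)
```

Because each atom is small and has a clearly defined input and output, it can be developed and tested in isolation, which is what makes the parallel work and easy testing mentioned above possible.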


Since we quite frequently develop software as an outsourcer (where both the development and later support of the system are our responsibility), we almost always use DevOps mechanisms. In our case, this entails compliance with the following principles:

  1. Coding – based on the best, proven solutions and technologies, using version control systems (GitLab) both for the application code and for the database.

This is important because the database layer in mass capacity systems is built with particular care in order to ensure maximum performance and maximum scalability (which is frequently the more significant of the two). That is why the version control system also has to control the versions of the code and the structure of the database itself.
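Versioning the database structure is often done with numbered migration scripts tracked in the same repository as the application code. The following is a minimal, self-contained sketch of that pattern; sqlite3 stands in for the real DBMS, and the migrations are invented examples.

```python
# Minimal migration-runner sketch: numbered DDL scripts are applied in
# order, and the current schema version is recorded in the database itself.

import sqlite3

MIGRATIONS = {  # version -> DDL (illustrative only)
    1: "CREATE TABLE account (id INTEGER PRIMARY KEY, owner TEXT)",
    2: "ALTER TABLE account ADD COLUMN balance_cents INTEGER DEFAULT 0",
}


def migrate(conn: sqlite3.Connection) -> int:
    """Apply any pending migrations; return the resulting schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    current = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()[0] or 0
    for version in sorted(MIGRATIONS):
        if version > current:
            conn.execute(MIGRATIONS[version])
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            current = version
    conn.commit()
    return current


conn = sqlite3.connect(":memory:")
migrate(conn)  # applies both migrations; a second call is a no-op
```

Because the migration scripts live under version control next to the application code, any revision of the application can be matched to the exact database structure it expects.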

We use code review for key elements of the code. This is done in part within the developer team, but we also have a dedicated unit responsible exclusively for assuring code quality. This unit also reviews all code written by new employees during their adaptation period.

The configuration of the version control system and the “merge” procedures enable us to, on the one hand, ensure uninterrupted development of the application, and, on the other hand, routinely troubleshoot any issues.

  2. Continuous Building – the created source code is subject to the building process, both with respect to artefacts and the application itself. Internally, during software development, we use applications in their containerised version. That is why, after the application is built, we create the Docker container image and the full environment based on Kubernetes orchestration.
  3. Continuous Testing – the environment built as described above is subject to automatic testing mechanisms. On the one hand, this means that the unit tests are automated; on the other hand, we also use conventional API or user interface tests. The purpose of the automatic tests (created by an independent team) is to conduct both regression tests (for functions from previous sprints) and tests of the current sprint. These tests are performed before the system is sent to the manual testing department. This way, we prevent a common issue that troubles many implementations: recurring errors introduced while fixing other errors.
  4. Packaging/Delivery – since we use containerisation, all of our solutions are prepared as finished containers, which the client can download directly from our internal repository. This makes software updates quite easy, and delivery processes become much simpler even for clients that have not yet implemented containerisation.
  5. Configuration – since we use Kubernetes (and infrastructure-as-code tools where necessary), the prepared application is fully configured as a working set of interconnected servers. In a typical case, implementing a new sprint only requires the client to refresh the container images and restart the Kubernetes orchestration.
  6. Monitoring – the application run as described above is continuously monitored. This is particularly important for ensuring a suitable SLA level with respect to troubleshooting.
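The chain described above can be pictured as a simple fail-fast pipeline. This is only a schematic sketch: the stage bodies are placeholders for what, in the real pipeline, would be `docker build`, the automated test suites and Kubernetes deployment steps, typically driven by GitLab CI.

```python
# Schematic build -> test -> package -> deploy pipeline: stages run in
# order, and the first failing stage stops everything after it.

from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]  # (name, action returning success)


def run_pipeline(stages: List[Stage]) -> List[str]:
    """Run stages in order; return the names of stages that completed."""
    completed = []
    for name, action in stages:
        if not action():
            break  # fail fast: nothing downstream of a failure runs
        completed.append(name)
    return completed


stages: List[Stage] = [
    ("build", lambda: True),    # e.g. build the Docker image
    ("test", lambda: True),     # unit / API / UI test suites
    ("package", lambda: True),  # push the image to the internal registry
    ("deploy", lambda: True),   # refresh images, restart orchestration
]
```

The fail-fast ordering is the point: a build or test failure prevents a broken container from ever reaching the packaging and delivery stages.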


Usability and reliability of the software is not everything, however – today’s applications have to be functional, intuitive, ergonomic and adapted to the particular user.

We use user-oriented design methods and advanced web development tools to create high-quality, sophisticated, dynamic and fully interactive user interfaces for our clients, making sure that they are both functional and friendly. We can also build such applications in versions that work not only on PCs but also on tablets and other mobile devices. This follows a trend we have observed: instead of building dedicated mobile applications, universal versions of web pages are created that adapt their presentation to portable devices.


In order to ensure appropriate software quality, our software is tested on many levels:

  • unit tests – created directly by the programmers for all three layers of the application;
  • manual tests – performed by the Manual Testing Department;
  • automatic tests – performed by the Automatic Testing Department and submitted to the Developer Department.

The mechanism of so-called unit tests means that our applications are provided with special functions used to test small, individual units of the software. For every job, the programmers create special test procedures, which are subsequently executed in a fully automatic manner. This way, we have been able to significantly improve software quality by automating regression tests, i.e. verifying that a previously fixed error does not reoccur.
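A unit test in this style might look as follows. The function under test and its test cases are invented for illustration; the point is that the test ships alongside the code and is replayed automatically as a regression test on every change.

```python
# Illustrative unit test: a small financial helper function together with
# the test procedures a programmer would commit alongside it.

import unittest


def net_to_gross(net_cents: int, vat_percent: int) -> int:
    """Unit under test: add VAT, rounding to the nearest cent."""
    return net_cents + round(net_cents * vat_percent / 100)


class NetToGrossTest(unittest.TestCase):
    def test_typical_rate(self):
        self.assertEqual(net_to_gross(10000, 23), 12300)

    def test_zero_rate(self):
        self.assertEqual(net_to_gross(10000, 0), 10000)

    def test_rounding(self):
        # 99 + round(22.77) = 99 + 23
        self.assertEqual(net_to_gross(99, 23), 122)
```

Run with `python -m unittest`; in a CI pipeline the same suite executes automatically on every commit.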

By using the database testing mechanism, we can automatically test the quality of the stored procedures in the database. These procedures are the foundation of the rest of the application; that is why we use tools that can automatically perform an enormous number of tests and verify whether the stored procedures behave correctly, even after the tiniest changes. Tests on this level can be understood as tests of the database layer in the conventional three-layer architecture of the application.
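The idea of testing logic that lives in the database layer can be sketched as follows. Real deployments test actual stored procedures with dedicated frameworks (for example pgTAP for PostgreSQL or utPLSQL for Oracle); here sqlite3 with a registered SQL function stands in so the example is self-contained, and the fee logic is invented.

```python
# Database-layer test sketch: logic registered in the database is
# exercised through SQL, exactly as a stored procedure would be.

import sqlite3

conn = sqlite3.connect(":memory:")

# Stand-in for a stored procedure: a fee of 1% with a minimum of 1.00.
conn.create_function("apply_fee", 1,
                     lambda cents: cents + max(100, cents // 100))


def db_test(sql: str, expected) -> None:
    """Run one database-layer test case and check its result."""
    got = conn.execute(sql).fetchone()[0]
    assert got == expected, f"{sql}: expected {expected}, got {got}"


# A batch of automatically executed test cases against the database logic.
db_test("SELECT apply_fee(5000)", 5100)    # minimum fee applies
db_test("SELECT apply_fee(50000)", 50500)  # 1% fee above the threshold
```

Because the tests call the logic through SQL, they keep working unchanged even when the procedure’s internal implementation is rewritten.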

The testing process is complemented by the mechanism of testing the programming interfaces for the application (REST). This means that, before testing the user interface, we automatically verify the programming interfaces (which run stored procedures and use the units of the applications tested during unit tests). Tests on this level can be understood as tests of the business layer.
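An automated API test on this level can be sketched with the standard library alone. The endpoint, payload and server below are invented stand-ins for the real REST interface; in practice this role is usually played by an HTTP test client or a tool such as Postman/Newman.

```python
# Business-layer (REST API) test sketch: start a tiny in-process HTTP
# server, call its endpoint, and check the JSON response.

import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/status":  # hypothetical endpoint
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass


server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/api/status"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())

server.shutdown()
```

Tests like this verify the business layer end to end, through the same interface the user interface itself will later call.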

The above-mentioned test mechanisms are combined with tests of the user interface of the web application, carried out by the testing department using, in particular, the Selenium/Katalon software: a set of tools that can record test scenarios and then run them automatically whenever the application is modified in any way, even if the particular modification seems completely unrelated to the tested components.

We have implemented a solution that can not only automatically build the application from the source code prepared by the programmers but, most importantly, automatically run all of the tests described above (Continuous Integration).

The goal is to make sure that every successive modification of any part of the application source code, no matter how small, triggers all automatic tests: unit tests, database tests (stored procedures), tests of the business layer and tests of the user interface.

The above-mentioned tests, depending on their position in the process, may, in particular, be used as regression tests (by testing functions completed in previous sprints during the current sprint), acceptance tests or any other type of tests.

In order to ensure maximum software quality, the Developer Department, Manual Testing Department and Automatic Testing Department operate independently of each other. They share the test scenarios, however, in order to ensure maximum possible test coverage of the application. The test scenarios are strictly defined. This way, all of the indicated teams can successfully contribute to the quality of the final application.

Due to the expected high quality of the software, the tests are a central component of the entire process. Moving beyond TDD (Test-Driven Development), we use a modified BDD (Behaviour-Driven Development). It is “modified” because we believe that it is not only the development of the software that should be made subordinate to the expected behaviour of the system. In our opinion, all processes that concern the relations between the software provider and the client should be subordinated to this purpose.

That is why we prefer to refer to it as Behaviour-Driven Relation (BDR).

In BDR, we assume that all of the relations between us – as the supplier – and the client should concentrate on the specific functions/behaviours of the application expected by the client. Moreover, these requirements should be formulated in the business language that is easiest for the client to understand. This is a clear reference to the DDD (Domain-Driven Design) vision.

We know from long experience that clients are usually much better at describing specific examples of how the application should work than at defining its operation abstractly, as in the “conventional” analysis. Such examples of system operation can serve as excellent input for software development, as a test specification (for manual or automatic tests) or even as a valuable supplement to the documentation (because they are created using the language and terms the client understands).

This means that we define the behaviour of the system as early as when the requirements for the system (or, more narrowly, for a milestone, sprint or even a single function) are being defined, e.g. using the Gherkin notation. Once we know, already at the analysis/design stage, how the application should behave, we can make the relevant arrangements with the client in a completely different way. This has a radical effect on our relations with the client: the arrangements now concern not a single deliverable but a specific function or behaviour that the client expects from the business perspective. We communicate with our clients not only in the language typical of software engineering, using terms such as “deliverables”, “sprints” or “backlog”, but, most importantly, in a language that describes the specific effects the client wants to achieve with the application.
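A requirement captured in Gherkin might look like this (the domain and scenario are invented for illustration):

```gherkin
Feature: Instant payment
  Scenario: Payment above the daily limit is rejected
    Given a customer with a daily limit of 1000 EUR
    And the customer has already sent 900 EUR today
    When the customer orders a payment of 200 EUR
    Then the payment is rejected
    And the customer is informed that the daily limit was exceeded
```

The same scenario serves three roles at once: a requirement the client can read and approve, a specification for manual or automated tests, and living documentation of the system’s behaviour.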

This transforms the typical provider-client relation into a genuine partnership, where both parties concentrate on the behaviour of the system, making its quality the top priority.

Coordination of work & knowledge sharing

In order to ensure that the work is correctly coordinated, despite the large number of different issues that have to be managed in each project, we have consistently used the solutions of the global leader – Atlassian:

  • Jira is our key system for defining tasks, monitoring the development process and workflow, specifying requirements and even communicating with clients, both when they are invited to submit their comments and during system development. We use Jira for comprehensive project management and for the timely completion of individual jobs without compromising quality.
  • Confluence is our key system for creating, storing and sharing knowledge. We use it both for knowledge about a particular system (this is how we create and update various types of documents) and for operating knowledge (collaboration procedures, internal regulations, etc.). It is a specialised collaboration server that project groups can use to create and share documents, ideas and knowledge. The result is an internal portal used to store documentation and share knowledge connected with specific jobs for the clients, broken down into individual cases and their history.

Both of these tools are complemented by a set of plug-ins that add the necessary elements to this functionality (e.g. accounting for working time, defining tests, etc.).

Our clients can expect comprehensive support after their system is completed and implemented, including the following:

  • readiness to provide, and the provision of, in-guarantee services: bug fixes and technical support for the system;
  • readiness to provide out-of-guarantee services: modification and further development of the system.

To offer our clients transparency and high-quality after-sales service, we provide a highly intuitive Internet ticket system, dmt@work, integrated with Jira. It can be used not only to register a particular job and track its progress but also for the two-way exchange of information, observations and comments concerning the issue at any stage. dmt@work is our interactive and dynamic platform for daily collaboration and information flow between the dmt team and our client.

