Tech learning roadmaps…


JavaScript unit testing frameworks in 2022: A comparison

By Mohsen Taleb |  Posted Oct 6, 2022| 15 min. (3188 words)

Choosing a JavaScript unit testing framework is an essential early step for any new front-end development project.

Unit tests are great for peace of mind and reducing software errors. You should always make the time to test.

But which framework should you choose for your project? We examined 11 of the most popular JavaScript unit testing frameworks according to stateofjs.com, to help you decide which is best for you.

Stateofjs collects data from thousands of front-end developers in its annual surveys. Here’s their most recent ranking of the most popular JS testing frameworks, sorted by usage. We’ll go over them one by one and try to understand their pros and cons.

Framework popularity over time

JavaScript moves fast, but JavaScript developers move even faster! As JavaScript keeps evolving, new tools are introduced and may outperform their ancestors. That’s why we should always keep an eye on the changes and choose the framework that fits best in our development process.

In this post:

  • Jest
  • Mocha
  • Storybook
  • Cypress
  • Jasmine
  • Puppeteer
  • Testing Library [React]
  • WebdriverIO
  • Playwright
  • AVA
  • Vitest
Jest
Used and recommended by Facebook, Jest is officially supported by the React dev team. If you’ve been a programmer for even just a couple of years, you’ll understand how important it is to choose a framework with a strong community and active support. When a tool is also backed by a giant like Meta, there’s considerable extra comfort and peace of mind. This simply means that if you run into a problem and can’t find a solution in the comprehensive documentation, there are thousands of developers out there who could help you figure it out within hours, if not minutes.

Sample code:

const sum = require('./sum');

test('adds 1 + 2 to equal 3', () => {
  expect(sum(1, 2)).toBe(3);
});

Pros:
  • Performance. For smaller projects, you might not worry about this too much initially, but increased performance is great for larger projects that want to continuously deploy their app throughout the day.
  • Compatibility. Whilst developers primarily use Jest to test React applications, Jest can easily integrate into other applications, allowing you to use its more unique features elsewhere. It’s compatible with Angular, Node, Vue and other babel-based projects.
  • Auto Mocking. When you import your libs in your test files, Jest auto-mocks those libraries to help you work with them more easily and avoid boilerplate.
  • Extended API. Unlike other libraries on the list, Jest comes with a wide API, not requiring you to include additional libraries unless you really need to.
  • Timer Mocks. Jest features a timer mocking system, which is great for fast-forwarding timeouts in the app and helps save time when your tests run.
  • Active development & Community. Jest improves with every update, and as mentioned before, has the most active community, which helps you reach solutions fast when you’re most in need.

Cons:
  • Slow runtime due to auto-mocking. While auto-mocking has been considered an advantage in the past, it could also turn into a negative from a testing standpoint, since it auto-wraps all your libraries and therefore makes your tests run a bit slower.
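To make the mocking idea concrete, here is a hand-rolled sketch of what a Jest-style mock function records. The `mockFn` helper and the `fetchUser` stub are made up for illustration; `jest.fn()` does all of this (and much more) for you.

```javascript
// A toy version of what a Jest-style mock function records.
// (Illustrative only: jest.fn() does this and much more.)
function mockFn(impl = () => undefined) {
  const calls = [];
  const fn = (...args) => {
    calls.push(args);     // record every call's arguments
    return impl(...args); // delegate to the provided fake implementation
  };
  fn.mock = { calls };
  return fn;
}

// Hypothetical stub standing in for a real module that Jest would auto-mock.
const fetchUser = mockFn(id => ({ id, name: 'stub' }));

fetchUser(1);
fetchUser(2);

console.log(fetchUser.mock.calls.length); // 2
console.log(fetchUser.mock.calls[0]);     // [ 1 ]
```

Tests can then assert on `fn.mock.calls` to verify how a dependency was used, which is exactly the workflow Jest's auto-mocking enables.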


Mocha
The second most-used library, Mocha is only a test framework and provides developers with just the base test structure. Mocha was originally designed to work with Node.js, but today it works with a large range of frameworks, including front-end frameworks such as React, Vue, and Angular, as long as you are willing to get your hands dirty with a bit of config.

It does not support assertions, spies, and mocks out of the box, but it’s possible to add these functionalities with add-ons/plugins. The most popular assertion library to pair with Mocha is Chai; other options include Assert, Should.js, and Better-assert.

Sample code:

var assert = require('assert');
describe('Array', function () {
  describe('#indexOf()', function () {
    it('should return -1 when the value is not present', function () {
      assert.equal([1, 2, 3].indexOf(4), -1);
    });
  });
});

Pros:
  • Lightweight and Simple. For smaller projects which don’t include sophisticated assertions or testing logic, Mocha is a simple solution.
  • Flexible Configuration. If you want flexible configuration, including your preferred libraries, then the additional set-up and configuration of Mocha is something you definitely need to check out.
  • ES module Support. Mocha supports writing your tests as ES modules, not just CommonJS (using import as well as require).

Cons:
  • Harder to set up. You have to include additional libraries for assertions, and this does mean that it’s a little harder to set up than others. That said, set-up is generally a one-time deal, but it’s nice to be able to refer to a “single source of truth” (documentation) instead of jumping all over the show.
  • Potential inconsistency with plugins. Mocha includes the test structure as globals, saving time by not having to include or require it in every file. The downside is that plugins just might require you to include these anyway, leading to inconsistencies, and if you’re a perfectionist it can be frustrating!
  • Weaker documentation. Reportedly, Mocha’s documentation is not its strength.
  • No arbitrary transpiler support. Up until v6.0.0, Mocha had a feature that allowed you to use an arbitrary transpiler like CoffeeScript, but it’s now deprecated.

Storybook
Unlike the other JS testing frameworks here, Storybook is more of a UI testing tool. It provides an isolated environment for testing components. Stories make it easy to explore a component in all its variations, regardless of its complexity, which means stories are a practical starting point for your UI testing strategy. You already write stories as a natural part of UI development, so testing those stories is an easy way to prevent UI bugs over time. Storybook also comes with tools, test runners, and handy integrations with the larger JavaScript ecosystem to expand your UI test coverage.

There are multiple ways you can use Storybook for UI testing:

  • Visual tests capture a screenshot of every story then compare it against baselines to detect appearance and integration issues
  • Accessibility tests catch usability issues related to visual, hearing, mobility, cognitive, speech, or neurological disabilities
  • Interaction tests verify component functionality by simulating user behavior, firing events, and ensuring that state is updated as expected
  • Snapshot tests detect changes in the rendered markup to surface rendering errors or warnings
  • Import stories in other tests to QA even more UI characteristics

It’s tough to directly compare Storybook to our other testing frameworks, so if this sounds useful for your project, we’d encourage you to do a bit more research of your own.

Cypress
Cypress works entirely in a real browser (Chrome, Firefox, and Edge) without the need for driver binaries. Automated code and application code share the same platform, giving you complete control over the application under test. Cypress is best known for its end-to-end (E2E) testing capability, meaning you can script a pre-defined user behavior and have the tool report potential differences each time you deploy new code.

Sample code:

describe('Actions', () => {
  beforeEach(() => {
    cy.visit('https://example.cypress.io/commands/actions')
  })

  // https://on.cypress.io/interacting-with-elements

  it('.type() - type into a DOM element', () => {
    // https://on.cypress.io/type
    cy.get('.action-email')
      .type('fake@email.com').should('have.value', 'fake@email.com')

      // .type() with special character sequences
      .type('{leftarrow}{rightarrow}{uparrow}{downarrow}')

      // .type() with key modifiers
      .type('{alt}{option}') // these are equivalent

      // Delay each keypress by 0.1 sec
      .type('slow.typing@email.com', { delay: 100 })
      .should('have.value', 'slow.typing@email.com')

    cy.get('.action-disabled')
      // Ignore error checking prior to type
      // like whether the input is visible or disabled
      .type('disabled error checking', { force: true })
      .should('have.value', 'disabled error checking')
  })

  it('.focus() - focus on a DOM element', () => {
    // https://on.cypress.io/focus
    cy.get('.action-focus').focus()
      .should('have.class', 'focus')
      .prev().should('have.attr', 'style', 'color: orange;')
  })
})

Pros:
  • E2E Testing. Since Cypress is run in a real browser it can be relied on for end to end user testing.
  • Timeline Snapshot Testing. At the time of execution, Cypress takes a snapshot of each moment and allows the developer or QA tester to review what happened at a particular step.
  • Steady and Dependable. Cypress gives a steady and dependable test execution result compared to other js testing frameworks.
  • Documentation. From zero to run, Cypress contains all the necessary information to get you up to speed. It also has a thriving community.
  • Fast. Test execution is fast in Cypress, with a response time of less than 20ms.

Cons:
  • No multi-browser. Cypress cannot run tests in multiple browsers at the same time.


Jasmine
Jasmine is a popular testing framework famously used as a behavior-driven development (BDD) tool. BDD involves writing tests before you write the actual code (as opposed to test-driven development (TDD)). Jasmine isn’t just for writing JS tests; it can also be used with other programming languages like Ruby (via jasmine-gem) or Python (via jasmine-py), though it has lost some of its popularity over the years. It’s DOM-less, which means it does not rely on browsers to run.

Sample Code:

describe('helloWorld', () => {
  it('returns hello world', () => {
    var actual = helloWorld();
    expect(actual).toBe('hello world');
  });
});

Pros:
  • Straightforward API. It provides a clean and easy-to-understand syntax, and also a rich and straightforward API for writing your unit tests.
  • Batteries included. There’s no need for additional assertion or mocking libraries; Jasmine has them all out of the box.
  • Fast. Since it doesn’t rely on any external libraries, it’s relatively fast.

Cons:
  • Polluting the global environment. It creates test globals (keywords like “describe” or “it”) by default so that you don’t have to import them in your tests. This may become a downside in specific scenarios.
  • Challenging async testing. Testing asynchronous functions is a bit hard with Jasmine.
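To see what “polluting the global environment” means in practice, here is a toy sketch of how a framework can install describe/it as globals; this is illustrative only and not Jasmine's actual bootstrapping.

```javascript
// Toy sketch of "test globals": the framework assigns its API onto
// globalThis, so test files can call describe/it without any imports.
// (Illustrative only; Jasmine's real setup is more involved.)
const registered = [];

globalThis.describe = (name, fn) => {
  registered.push(name); // track registered suites
  fn();
};
globalThis.it = (name, fn) => fn();

// A "test file" can now use the globals directly, with no imports:
describe('math', () => {
  it('adds', () => {
    if (1 + 1 !== 2) throw new Error('broken math');
  });
});

console.log(registered); // [ 'math' ]
```

The convenience is real, but so is the downside: any other code (or library) sharing the global scope can collide with these names.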


Puppeteer
Puppeteer is a Node library developed by Chrome’s development team. It’s a framework for test execution that enables users to control headless Chrome. Everything you can do manually in a browser can also be done with Puppeteer.

Most people use this tool to perform several different tests on web applications, such as:

  • Generating screenshots and PDFs of web pages
  • Crawling web pages
  • Automating UI testing, keyboard input simulation, and form submissions
  • Testing Chrome extensions

Since Puppeteer drives a headless but full-fledged browser, it’s an ideal choice for testing the UI of single-page applications (SPAs).

Sample Code:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });

  await browser.close();
})();

Testing Library [React]

React Testing Library is not a test runner like Jest; in fact, they can work in tandem. Testing Library is a set of tools and functions that help you access the DOM and perform actions on it, i.e., rendering components into the virtual DOM, then searching and interacting with it.

In some ways, Jest and other traditional testing frameworks aren’t comparable with Testing Library. You need Jest in order to collect all the test files with a .test.js extension, run each one, and show pass/fail results. It’s more accurate to compare Testing Library with Enzyme or Cypress.

Sample code:

import React, {useEffect} from 'react'
import ReactDOM from 'react-dom'
import {render, fireEvent} from '@testing-library/react'

const modalRoot = document.createElement('div')
modalRoot.setAttribute('id', 'modal-root')
document.body.appendChild(modalRoot)

const Modal = ({onClose, children}) => {
  const el = document.createElement('div')

  useEffect(() => {
    modalRoot.appendChild(el)
    return () => modalRoot.removeChild(el)
  })

  return ReactDOM.createPortal(
    <div onClick={onClose}>
      <div onClick={e => e.stopPropagation()}>
        {children}
        <hr />
        <button onClick={onClose}>Close</button>
      </div>
    </div>,
    el,
  )
}

test('modal shows the children and a close button', () => {
  // Arrange
  const handleClose = jest.fn()

  // Act
  const {getByText} = render(
    <Modal onClose={handleClose}>
      <div>test</div>
    </Modal>,
  )

  // Assert
  expect(getByText('test')).toBeTruthy()

  // Act
  fireEvent.click(getByText(/close/i))

  // Assert
  expect(handleClose).toHaveBeenCalledTimes(1)
})
Pros:
  • Recommended by React Team. You can find references and recommendations for using this library in React’s documentation.
  • Lightweight. It’s specifically written for testing React apps/components.
  • Community. Testing Library has been getting really good traction recently. As a matter of fact, stackoverflow.com stats show that the volume of questions about Testing Library has overtaken Enzyme.

Cons:
  • No Shallow Rendering. It doesn’t provide a way to “shallowly” render your component without its children, though you can achieve this with the mocking features of testing frameworks like Jest.


WebdriverIO

WebdriverIO is an automation framework for web and mobile applications. It helps you create a concrete, scalable, and stable test suite. The main difference between WebdriverIO and the others we covered earlier in this article is that it can be used to test hybrid and native mobile applications and native desktop applications, along with web applications. The whole package! WebdriverIO leverages the power of the WebDriver protocol, which is developed and supported by all browser vendors and guarantees a true cross-browser testing experience. It relies on a common automation standard that is properly tested and ensures compatibility for the future. This standard allows WebdriverIO to work with any browser, regardless of how it is configured or used.

Sample Code:

it('can handle commands using async/await', async function () {
    const inputElement = await $('#input')
    let value = await inputElement.getValue()
    console.log(value) // outputs: some value
})

Pros:
  • Multi-Platform. It enables running tests on desktop as well as on mobile.
  • Compatibility. Works with many assertion libraries and testing frameworks (Jasmine, Mocha, Cucumber)
  • Simple and fast. For all the above reasons.

Cons:
  • Hard to debug. Debugging is only possible through the WDIO task runner.
  • Documentation. At the time of writing, doesn’t have the best documentation and some APIs are missing.


Playwright

Playwright is another automation framework that works best for E2E testing. This framework is built and maintained by Microsoft and aims to run across the major browser engines: Chromium, WebKit, and Firefox.

It’s actually a fork of an earlier project, Puppeteer (which we went over above). The main difference is that Playwright is written specifically for making E2E tests by developers and testers. Playwright can also be used with major CI/CD servers like TravisCI, CircleCI, Jenkins, Appveyor, GitHub Actions, etc.

Sample Code:

import { test, expect } from '@playwright/test';

test('my test', async ({ page }) => {
  await page.goto('https://playwright.dev/');

  // Expect a title "to contain" a substring.
  await expect(page).toHaveTitle(/Playwright/);

  // Expect an attribute "to be strictly equal" to the value.
  await expect(page.locator('text=Get Started').first()).toHaveAttribute('href', '/docs/intro');

  await page.click('text=Get Started');
  // Expect some text to be visible on the page.
  await expect(page.locator('text=Introduction').first()).toBeVisible();
});

Pros:
  • Backed up by a trusted company. It’s maintained and supported by Microsoft.
  • Multi-Language. Playwright supports multiple languages such as JavaScript, Java, Python, and .NET C#
  • Multiple Test Runner Support. Can be used with Mocha, Jest, and Jasmine.
  • Cross-browser. The main goal for this framework is to support all major browsers.
  • Emulating and Native Events Support. Can emulate mobile devices, geolocation, and permissions. Also tapping into native input events for mouse and keyboard is supported.

Cons:
  • Still early-stage. It’s fairly new, and community support is still limited.
  • No real device support: Doesn’t support real devices for mobile browser tests but supports emulators.


AVA
A minimalistic testing runner, AVA takes advantage of JavaScript’s async nature and runs tests concurrently, which, in turn, increases performance.
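A rough sketch of why concurrent execution pays off: three independent 50 ms async tests finish in roughly 50 ms when run via Promise.all, not 150 ms. The `test` and `sleep` helpers here are made up for illustration; this is not AVA's internals.

```javascript
// Toy illustration of the concurrency speed-up AVA relies on.
const registered = [];
const test = (name, fn) => registered.push({ name, fn });
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Three independent async "tests", each taking ~50 ms.
test('a', async () => { await sleep(50); });
test('b', async () => { await sleep(50); });
test('c', async () => { await sleep(50); });

// Run every registered test concurrently and report the elapsed time.
function runAll() {
  const start = Date.now();
  return Promise.all(registered.map(t => t.fn())).then(() => Date.now() - start);
}

runAll().then(elapsed => console.log(`3 tests done in ~${elapsed} ms`));
```

Run serially, the same suite would take at least 150 ms; concurrency only works this well because the tests share no state, which is exactly the discipline AVA encourages.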

AVA doesn’t create any Globals for you, so you can control more easily what you use. This can bring extra clarity to tests, ensuring that you know exactly what is happening.

Sample Code:

import test from 'ava';

test('foo', t => {
	t.pass();
});

test('bar', async t => {
	const bar = Promise.resolve('bar');
	t.is(await bar, 'bar');
});
Pros:
  • Runs tests concurrently. Taking advantage of the async nature of JavaScript makes testing extremely efficient. The main benefit is minimizing the wait time between deployments.
  • Simple API. Contains a simple API that provides only what you need
  • Snapshot testing. Provided via jest-snapshot, which is great when you want to know when your application’s UI changes unexpectedly.
  • TAP reporter. AVA shows a human-readable report by default, but it’s nice to be able to get a report in TAP format!

Cons:
  • No test grouping. There’s no way in AVA to group similar tests together.
  • No built-in mocking. AVA doesn’t ship with mocking, but you can use third-party libraries for that (like Sinon.js).

Vitest
Made by the team behind Vite, Vitest is a Vite-native test runner that provides a Jest-compatible API, allowing developers to use it as a drop-in replacement for Jest in most projects.

Vitest cares a lot about performance and uses Worker threads to run as much as possible in parallel, bringing the best developer experience. It stays lightweight by carefully choosing its dependencies (or directly inlining needed pieces).

This framework aims to position itself as the Test Runner of choice for Vite projects, and as a solid alternative even for projects not using Vite. The team has provided a comparison page which shows you the differences between this tool and the most-used test runners like Jest or Cypress.

Sample code:

import { assert, expect, test } from 'vitest'

// Edit an assertion and save to see HMR in action

test('Math.sqrt()', () => {
  expect(Math.sqrt(4)).toBe(2)
  expect(Math.sqrt(144)).toBe(12)
  expect(Math.sqrt(2)).toBe(Math.SQRT2)
})

test('JSON', () => {
  const input = {
    foo: 'hello',
    bar: 'world',
  }

  const output = JSON.stringify(input)

  assert.deepEqual(JSON.parse(output), input, 'matches original')
})
Pros:
  • Native ESM support. That means you can benefit from native browser support for importing ESM modules.
  • TypeScript support. Vitest supports both TypeScript and JSX out of the box.
  • Multi-threaded. It brings you the best DX with workers multi-threading via tinypool.
  • In-source testing. Vitest provides a way to run tests within your source code along with the implementation, similar to Rust’s module tests.

Cons:
  • In early adoption phase. Although the team behind Vitest has done a tremendous job on creating this tool, it’s still young and the community support may not be what you’re hoping for.

Bonus overview

Here’s a comparison table consisting of common characteristics of all of these frameworks, to help you figure out which is the best for your specific testing case:

Table of testing framework features

Which JavaScript testing framework should I use?

After looking at just a few of the many frameworks out there, it’s pretty clear that choosing a framework isn’t black and white.

Most frameworks (Mocha being the exception) provide the essentials, which is a testing environment along with the mechanisms to ensure that given X, Y is always returned, with a few simply giving you more “bells and whistles.” So for the basics, you should feel pretty confident in choosing any of them, with any further preference depending on what you and your particular project want and need.

If you’re new to the game and need a lot of help getting up to speed, you should choose frameworks with strong communities like Jest. If you require a broad API along with specific (perhaps unique) features then Mocha is a smart choice, as the extensibility is there. And if your tests are mostly E2E, then you can choose between Cypress, Puppeteer or Playwright.

Please also note that if a library is new and doesn’t have a strong community at the moment, it doesn’t mean this will always be the case. Be sure to check them out from time to time. The JavaScript world is changing every day! Here’s hoping this article helps you in choosing the perfect JavaScript unit testing framework in 2022 and beyond.

A testing framework can help you improve your software quality and catch errors early. To help ensure your JavaScript project is error-free, add Raygun Error Monitoring to your testing - and production - environments. Try it for free!


Just now, wtf said:

Do you have a reference book for the data analytics certification, bro?

No bro...

Post the certification details, like the exam code. If I come across anything, I will share it.


11 hours ago, wtf said:


AWS Certified Data Analytics Specialty DAS-C01


AWS Certified Data Analytics — Specialty



#SatyenKumar tips to get AWS Certified on Data Analytics

Recently I posted an update about passing the AWS Data Analytics Specialty exam. With this quick note, I am sharing useful information for those who are looking for the Data Analytics certification.

This is official Learning path of the certification.



#SatyenKumar sharing on #medium and #Linkedin on #AWSCertified #Data #Analytics

How can you quickly certify on AWS Data Analytics — Specialty?

As it is a specialty exam, it is not an easy one. However, I have gathered some suggestions to help you pass it successfully.

You can follow the material below. It is free.

  1. What is the exam about? Exam Guide (PDF)
  2. Fundamentals of Data Analytics on AWS (3.5 hours) >> Access Here
  3. Exam Readiness Guide (3.5 Hours) : Access
  4. My YouTube playlist : BEST & sufficient to clear (20 video) >> Here
  5. AWS Whitepapers

Amazon EMR Migration Guide: How to Move Apache Spark and Apache Hadoop From On-Premises to AWS | Big Data Options on AWS | Lambda Architecture for Batch and Stream Processing | Streaming Data Solutions on AWS with Amazon Kinesis | Teaching Big Data Skills with Amazon EMR | Reference Architecture: SQL Based Data Processing in Amazon ECS


On 10/10/2022 at 9:01 PM, fasak_vachadu said:

Did anyone take this course to prepare for interviews?

Did you take the course from Interview Kickstart??


How are you all actually preparing to land good positions

or full-time jobs?

I am in the QA automation domain.

@csrcsr @viky. @Spartan @dasari4kntr

@fasak_vachadu I'm preparing for certifications. My area is BI/data warehousing. I decided to take the architect certifications for cloud environments to understand them better, I'm seriously thinking about learning Python, and my SQL is at a mid level.


The Architecture of a Modern Startup

Hype wave, pragmatic evidence vs the need to move fast


Workflow — all images by author

The Tech side of startups can sometimes be very fluid and contain a lot of unknowns. What tech stack to use? Which components might be overkill for now but worth keeping an eye on in the future? How to balance the pace of business features development while keeping the quality bar high enough to have a maintainable codebase?

Here I want to share our experience building https://cleanbee.syzygy-ai.com/ from the ground up: how we shaped our processes based on needs, and how our processes evolved as we extended our tech stack with new components.

Businesses want to conquer the market, and engineers want to try cool stuff and stretch their brains. Meanwhile, the industry produces new languages, frameworks, and libraries in such quantities that you will not be able to check them all. And, usually, if you scratch the shiny surface of the Next Big Thing, you will find a good old concept. Good, if you are lucky.

One of the most exciting topics to argue about is the processes — whether you rely on trunk-based development, prefer a more monstrous GitHub flow, are a fan of mobbing, or find it more efficient to spend time in PR-based code reviews.

I have experience working in an environment where artifacts were thrown away on users without any standardized process. In case of issues, developers had a lot of fun (nope!) trying to figure out what version of components was actually deployed.

At the other end of the spectrum is the never-ending CI queue. After you create a PR, you get to entertain yourself for the next 30 minutes by betting on whether the CI cluster will find the resources to run tests over your changes. Sometimes the platform team introduces new, exciting, and useful features that might break compatibility with the existing CI boilerplate, which can fail all your checks at the last minute, after an hour of waiting.

I strongly believe that, as usual, it all depends on team maturity, the kind of software you are building, and various business constraints, for example, the existence of an error budget and the importance of time-to-market versus SLXs.

I think what is important is to have some agreed processes in place that everyone is aware of and follows. It’s also important to have the bravery to challenge and change it if there is evidence of a better alternative.

Start Shaping the Process

What we have at the start:

  • less than a dozen developers — in-house team and temporary contractors who want to and can work asynchronously
  • completely greenfield project — no single line of code has been written yet. Requirements are vague, but they already started shaping into something
  • tech-wise — the clear need for a backend that should talk with mobile clients
  • some simple web frontend — static pages should be enough! (nope)

We started simple: code on GitHub and a PR-based flow, with a single requirement that tickets be splittable so they can be delivered in 1–3 days. This required some practice in story slicing, and a sense of visible, fast progress, shown through the ability to move tickets to Done, can be a great motivational factor for the team to get on board with the idea.

Linters and static analyzers help us skip exciting discussions such as how many arguments per method are too many (6!). We’ll gradually add auto-tests. We also tried codesense. It has a very promising approach to highlighting the important parts of code (the bits that change frequently and should definitely have a higher maintainability bar!) and identifying complexity by looking at the level of nesting in the code. It is probably a bit expensive for startups initially, but it 100% provides decent hints for engineers.

On the architecture side of things, there was a temptation to dive deep into the wonderland of microservices. But looking at the horrifying diagrams of connections between services at big players, and the need to trace requests across them, it really seemed a suicidal approach for an early-stage team that wants to move fast.

Analysis of the requirements allowed us to identify three groups of jobs:

  • core API with usual CRUD-like activities
  • search and recommendations
  • temporary workloads that do something useful on a schedule (running roughly on time, with occasional delays, is OK)


Choice of tech stack: when time is a bit limited and expectations are high, use what you know and have mastered (yeah, maybe for some it is boring technology). Hence, FastAPI, REST, stateless services, Python, Redis, and Postgres are our best friends (yeah, we like Go and Rust, but we need to pay our dues a bit more!).

With mobile clients, the situation was a bit different. We foresaw a lot of screens with states and interactions with remote services but not too much custom, platform-specific tweaking. Hence, the idea of having a single codebase for iOS and Android was very appealing.

Nowadays, the choice of frameworks is really wide, but again, due to some experience with Flutter, we decided to give it a go. Within mobile development, one of the important aspects to better decide on is state management. Here, you will have a nice abundance of acronyms to be puzzled about from various languages and frameworks. Some include MVC, MVVM, VIPER, TCA, RIBs, BLOC, etc.

Our motto: start with the simplest (*) solution sufficient to support the necessary functionality. (*) Simple. Well, let’s put it this way: we think we understand it.

However, we definitely made a mistake after building the MVP, because we decided to build on top of it instead of throwing it away. Hence, on one wonderful (nope!) sunny day, I questioned my sanity when, after commenting out code and cleaning all possible caches, I still didn’t see my changes on a new screen. Yeah, dead code should be removed!

Start Building!

After those initial formalities were settled, the next necessary thing was to be able to check client-server interactions.

An API contract is a great thing, but it becomes much more obvious that something is wrong when a real server throws a “schema validation error” at you or miserably fails with an HTTP 500 error code.

Backend services were initially split into two groups — API monolith and Search and Recommender. The first contains more or less straightforward logic to interact with DB, and the second contains CPU-intensive computations that might require specific hardware configuration. Every service has its own scalability group.

As we were still thinking about the rollout strategy (and arguing about which domain to buy), the goal was simple: minimize the struggles of mobile engineers dealing with the backend, i.e., the alien stack. So let’s pack everything into Docker.

Once we prepared everything to be deployable locally, mobile engineers could run docker-compose commands and have everything ready (after a few painful attempts that revealed flaws in the documentation; the real value of such exercises is to react to every “WTF!” and improve things).

`Everything` is good, but what is the point of an API running on top of an empty DB? Manually entering the necessary data quickly leads to depression (and the risk of longer development cycles). Hence, we prepared a curated dataset to insert into the local DB so there is something to play with. We also started using it for auto-tests. Win-win! Auth becomes less problematic for defining test scenarios when you have dozens of dummy users with similar passwords!

Try new things or choose third-party providers

Dealing with new technology is always a bit dangerous. You and your team can’t know everything (and sometimes things that you think you know can fool you, but that’s another story). And still, it is often necessary to assess and investigate something no one has touched.

Payments, email, chat, SMS, notifications, analytics, etc. Every modern application usually represents business logic glued together with several third-party providers.

Our approach to choosing whom we work with: time-capped, try-to-build-with-it activities to evaluate the most promising options, chosen by features, supported languages, and, in the case of providers, pricing.

How did we get into Terraform?

The backend, apart from the DB, also needs some object/file storage. Sooner or later, we would also need DNS so that our services are ready to face the big cruel world.

The choice of cloud provider was purely based on existing expertise within the team. We already use AWS for other projects, so we decided to stick with it. For sure, it is possible to do everything in the AWS console, but as time goes by, that becomes a classic big ball of mud that everyone is terrified to touch, and no one remembers why any given bit exists at all.

OK, seems the paradigm of infrastructure as code can be handy here.

Tooling-wise, the choices are not that numerous: vendor-specific options (AWS CloudFormation, Google Cloud Deployment Manager, Azure Automation), Terraform, and its rivals.

Based on our existing experience with Terraform… you already get the idea of how we choose things, right?

Yeah, the initial setup takes some time (and, left uncontrolled, can easily become the same big ball of mud in TF as well), but at least you end up with some documentation of the infrastructure and visibility into WHY each piece is there. Another major advantage: whatever you manage through TF will be updated automatically (well, whenever you or CI/CD run the corresponding commands).
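For illustration, a minimal Terraform fragment in that spirit, covering the object storage and DNS mentioned earlier (the bucket and domain names are hypothetical):

```hcl
# Object storage for app assets; the name is an invented example
resource "aws_s3_bucket" "assets" {
  bucket = "acme-app-assets"
}

# Public DNS zone so services can face the big cruel world
resource "aws_route53_zone" "main" {
  name = "example.com"
}
```

Every resource now lives in code review history, which is exactly where the “WHY is this here?” answers end up.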

Secrets management

For AWS itself, given we run everything inside AWS, we can rely on IAM and assumed roles by attaching the necessary policies to our VMs. But we need integration with third-party services, and some way to pass secrets to our apps, for example, the password for the DB. So we need a solution for secrets management. AWS has KMS, GitHub Actions has its own secrets, and beyond that there are a bunch of other providers. The real question is: what do you need from secrets management?

  • audit
  • path-based access
  • integration with Kubernetes
  • ability to issue temporary credentials
  • web UI
  • free to use
  • secrets versioning

KMS was very handy, and we managed to wire it into GitHub Actions, but the web UI of Vault and the ability to use it for free (if you run it yourself) tipped the scales in Vault’s favor.
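Vault’s path-based access is expressed in small HCL policies; a sketch (the path and policy intent are illustrative, assuming the KV v2 secrets engine):

```hcl
# "backend-app" policy: read-only access to its own subtree of the KV store
path "secret/data/backend/*" {
  capabilities = ["read"]
}
```

Attaching a policy like this per app is what makes the “path-based access” bullet above practical.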

Path to Kubernetes

And once we had dockerized the app, we started considering Kubernetes, as it offers a few goodies out of the box. The most important ones: spinning up the necessary number of pods to meet performance demands, and defining all your needs in a declarative fashion, so that, given a sufficient level of automation, no human being should ever run kubectl apply by hand. AWS has EKS to start with, and it can be managed via Terraform.

On the other hand, the steep learning curve (grasping the idea that you define exactly what should be up and running) and the rather specific tooling around it were fair reasons to think twice.
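The declarative fashion boils down to manifests like this minimal Deployment (the names and image URL are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-api
spec:
  replicas: 3                 # scale the pod count to meet performance demands
  selector:
    matchLabels:
      app: backend-api
  template:
    metadata:
      labels:
        app: backend-api
    spec:
      containers:
        - name: api
          # hypothetical ECR image reference
          image: 123456789.dkr.ecr.eu-west-1.amazonaws.com/backend-api:latest
```

You state the desired end result and the cluster converges to it, which is exactly what makes “no human runs kubectl apply” feasible.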

Helm charts

If we talk Kubernetes and already have Docker images released on every merge to main, Helm charts become the next step in adopting a modern infrastructure stack. We plugged in AWS ECR to keep track of every new release and published Helm charts to a dedicated S3 bucket that became our internal Helm chart registry.
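One way to treat an S3 bucket as a chart registry is the third-party helm-s3 plugin; a hedged sketch of a CI publish step (the bucket, repo alias, and chart names are made up, and AWS credentials are assumed to be configured in the job):

```yaml
# single CI step publishing a chart to an S3-backed registry
- name: Publish helm chart
  run: |
    helm plugin install https://github.com/hypnoglow/helm-s3.git
    helm repo add internal s3://acme-helm-charts/stable
    helm package ./charts/backend-api
    helm s3 push ./backend-api-0.1.0.tgz internal
```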

Plugging it all together was not as straightforward as expected: Kubernetes nodes initially couldn’t connect to ECR and pull the necessary Docker images, the Terraform module (aws-ssm-operator) intended to work with secrets in AWS KMS was deprecated and didn’t support the recent Kubernetes API, and secrets and config maps weren’t in the mood to be exposed into pods.

The first rollout of services brought happiness to the mobile folks: no need to care about instructions for a local setup! For the first week or so it was not really stable, but then, one less thing to care about.

Do you need all of it? Not necessarily.

I must admit, this mix (Kubernetes with Vault via Terraform and Helm) is probably not for everyone, and you most likely won’t need it at the initial stage. A simple docker push to ECR on merge to main, plus an ssh into EC2 followed by docker pull and a docker-compose stop/start during a release from CI/CD, can work well (at least for the happy path), and it will be clear to everyone at first glance. That’s exactly how we redeploy our static websites at the moment: the CI builds the new version and copies it into the corresponding S3 bucket.
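That simple path can be sketched as a single workflow (all names, secrets, and hosts are invented for illustration):

```yaml
# .github/workflows/release.yml — hedged sketch of the "good enough" release path
name: release
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      ECR_REPO: 123456789.dkr.ecr.eu-west-1.amazonaws.com/backend-api  # hypothetical
    steps:
      - uses: actions/checkout@v3
      - name: Build and push image
        run: |
          docker build -t "$ECR_REPO:$GITHUB_SHA" .
          docker push "$ECR_REPO:$GITHUB_SHA"
      - name: Redeploy on the EC2 box
        run: |
          ssh ec2-user@api.example.com \
            "docker pull $ECR_REPO:$GITHUB_SHA && docker-compose stop && docker-compose up -d"
```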

Maturing the Infrastructure

AWS is nice enough to offer credits to those who are wild enough to explore the shady paths of the startup world. Can we use them to save a few bucks on GitHub minutes and expose fewer secrets and less infrastructure to GitHub’s VMs?

How about self-hosted runners? That is, when you open a PR, it is not a GitHub VM but your own Kubernetes cluster that allocates a pod to run your CI checks. Sure, it is difficult to prepare everything for iOS releases (more on that below), but Android and the backend should surely work on good old Linux?!

We built it via dedicated k8s pods, but there is also the option of running checks on spot EC2 instances.
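One popular way to get such runner pods (a sketch, not necessarily our exact setup) is the actions-runner-controller project; this minimal manifest assumes the controller is already installed in the cluster, and the repository name is hypothetical:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: ci-runners
spec:
  replicas: 2
  template:
    spec:
      repository: acme/backend   # hypothetical org/repo whose jobs land on these pods
```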

Observability and Co

There is a lot of marketing fluff around terms like monitoring and alerting.

In some companies, those things are implemented just for bragging rights (“We have X for that!”), while engineers remain blind to what is happening in production when there are real issues, or the alert channels have to be muted because they contain nonactionable noise.

And I must say, we still have a looong way to go.

The first thing you will find as soon as you search for this kind of solution is the ELK stack and a bunch of paid providers. After measuring the time and effort needed to maintain our own setup, I started thinking a paid solution might be worth it, but only if you can truly delegate the burden of squeezing out the most important info about your apps and the state of your infrastructure to an existing solution. It all depends on whether they have presets of metrics, log parsers, and index mappings that you can easily adapt to your project.

For logging, we currently rely on ELK. Yeah, it is more or less straightforward to set up, and most likely some people find Elastic’s query language very convenient to use daily.

Here we are still exploring options, as it seems that good old kubectl logs piped through grep answers questions like “What is the last error from the app1 pods?” in a much more timely fashion, without getting lost among endless UI controls. But most probably, Kibana’s UI still hides the levers we should pull to add a proper ingestion pipeline and choose the corresponding Elastic index mapping for filebeat.

For alerting, we set up Prometheus and integrated it with Slack. Again, mainly because we had experience with it before.
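Routing Prometheus alerts to Slack takes only a few lines of Alertmanager config (the webhook URL and channel are placeholders):

```yaml
# alertmanager.yml fragment
route:
  receiver: slack-alerts
receivers:
  - name: slack-alerts
    slack_configs:
      - api_url: "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder webhook
        channel: "#alerts"
        send_resolved: true   # also notify when an alert clears
```

Keeping `send_resolved` on is a small design choice that reduces the “is it still broken?” pings in the channel.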

Now, why do we need Azure?!

As usually happens when products evolve, new requirements introduce new kinds of needs:

  • now, apart from having things publicly visible, we need some resources available to the team only
  • to manage feature flags, access the Vault UI, or struggle with Elastic to figure out the last API error

Sure, there are paid solutions for that, or you can combine an identity-as-a-service provider (Azure Active Directory) for authenticating your teammates with any VPN provider. We chose OpenVPN due to its free tier, and expose the necessary services to the internal network only, so that those who should can log in using their credentials. It has one clear advantage compared to using the AWS stack: it is free (for a limited number of connections).

OK, why do we need Google Cloud?

So far, we have mainly discussed the backend side of things. But there is more. The thing that you see first: the mobile apps! Flutter or something else, they must also be built, linted, and tested. And published somehow, somewhere, so stakeholders can immediately be in awe of the new features (and find new bugs).

For rolling out to production, you need to pass through a bunch of formalities (screenshots, changelog = what’s new, review) that delay your audience from enjoying those pieces of art.

I must say that the stores’ APIs are not really friendly to frequent releases. Once you build and sign an app, publishing can take 15+ minutes. As with every other API, the app stores’ APIs may and will fail sooner or later. Oh, and signing can be a nightmare, as it differs between platforms. It would be nice if engineers didn’t waste time on all of this, preparing releases from their laptops.

The first (and probably the only?) thing that you should consider is fastlane. Initially, I had some prejudice against all those new terms like gems (love that name, though!) and bundle, but it works. To run it from CI, some effort is required to deal with signing secrets: a jks keystore for Android or match for iOS.
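For flavor, a hedged sketch of what an Android lane might look like (the lane name and release track are made up; the real Fastfile depends entirely on your project setup):

```ruby
# Fastfile sketch; signing uses the jks keystore referenced from CI secrets
platform :android do
  lane :internal do
    gradle(task: "bundle", build_type: "Release")
    upload_to_play_store(track: "internal")
  end
end
```

Run from CI as `bundle exec fastlane android internal`, so no release ever has to leave an engineer’s laptop.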

Towards the “dark” side

Next, you will start thinking about app distribution: TestFlight is a handy tool in the iOS world, but what about Android? We ended up using App Distribution, a solution from Firebase, mainly because it worked for us on the first try. But there are other options (some of which claim to work for both platforms).

What is important is that you can do everything from fastlane! Even as your app evolves and you start adding various extras (analytics, chats, maps, geo), many of them come from Google directly or from Firebase. As Firebase offers many goodies, it was natural to collect analytics events there, and after a few tweaks to their IAM policy, we set up the export of raw events into GS buckets to be able to play with BigQuery.

Prod vs Staging — The Great Split!



For the backend, we have had auto-tests right from the start. Various practices like test doubles proved quite efficient at preventing regressions, even in complex business logic with integrations with side services. On the mobile side, we were a bit limited due to the coexistence of code from the MVP, and auto-tests were not so helpful for complex business scenarios, like someone wanting to use our services while we couldn’t charge their bank card.

Manual testing was very time-consuming and error-prone, especially when the business logic evolved dynamically and the state of the data in the database after recent updates became impossible from the point of view of the domain rules.

Yeah, so it would be nice to run e2e tests by clicking through the app with data that we maintain (and are sure is valid). It would also be nice if those tests didn’t pollute the actual database, S3 buckets, and third-party providers.

We started with a single main branch and a single environment (RDS, Redis, k8s namespace, and S3) used by the first testers and developers. We were not exposed to the public, but as we moved closer and closer to release, it became clear that some distinction was necessary between the places where we can break things and a stable environment.

In the mobile applications, changing the API’s URL at build time was all it took. On the backend, a few things had to be done to support deploy-specific configurations: infrastructure-wise, creating dedicated policies and resources, plus parameterizing a few bits in the code where specific URLs were expected. Apart from that, there are several repositories, some of them independent, but some dependent, as in the case of shared functionality.

Do you know what happens when you update shared functionality without immediately redeploying and testing all the dependent apps? A few days later, when you have completely forgotten about it, you make some innocent, purely cosmetic change somewhere else in a dependent repo, which leads to a redeployment that pulls in the latest dependency.

Surely, during an important demo, or right after it, you will see some stupid error caused by an incompatibility in a single condition that you forgot to double-check.

  1. So, the first important consideration for splitting the environments: automate the overall rollout of all dependent applications whenever a base repo is updated. You may ask the team to do it, and everyone agrees, but someone will forget to run the pull.
  2. The second aspect: what do we actually need to deploy? Do we need to maintain all apps in every environment, including temporary jobs responsible for sending emails or notifications? It seems some flags to include or exclude jobs from a deployment might be helpful.
  3. E2E (and later, probably, Staging) does not necessarily have to be reachable by everyone on the internet.
  4. Promoting new releases to E2E and Staging has to be automated.
  5. Promoting new releases to Prod, at least for now, is better kept controlled and manual.

Currently, we have three environments, which fulfill all the points above:

  • E2E: an environment where integration tests run on curated data to ensure base functionality is still in place
  • Staging: where core development happens and where beta testers can try to break what we build
  • Prod: happy to greet new users

The Kubernetes cluster is still a single one; everything is split at the namespace level. A similar thing happened with RDS, where several databases co-live in one RDS instance.
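The namespace-level split amounts to nothing more than three manifests (a sketch; in practice per-environment differences live in Helm values):

```yaml
# one cluster, three isolated namespaces
apiVersion: v1
kind: Namespace
metadata:
  name: e2e
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
```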




On the mobile-testing automation side, the choice is not really big. First, you have to decide whether you will use a device-in-the-cloud provider or run tests yourself.

You can certainly plug a smartphone into a laptop and run tests, but wouldn’t it be nice (and right!) if CI did it instead? When you start considering vendors that provide emulators and real devices to play with, you will find that the choice of mobile testing frameworks is not wide either, and that is the second choice you have to make (and the choice of provider might limit you here). Another important consideration: are there specific hardware requirements, i.e., a GPU or NPU? There weren’t for us, so any emulator was sufficient.

We identified two main options for a mobile e2e testing framework: Flutter integration tests and Appium-based pytests. Firebase Test Lab supports Flutter integration tests, although it required some tweaking to allow requests from their IP ranges (the VMs with running emulators) to reach our E2E API.

Appium, in particular its Python API, was very promising, as it let us use something like TestProject (you guys rock!) to record all the clicks through the application per scenario; hence, it doesn’t require specific programming knowledge but allows you to learn it gradually. So far, Appium is much more comprehensive in our setup in terms of scenario coverage.

E2E tests have one tiny (nope!) issue: the cold start of the app in an emulator is not very fast. If we add on top of it the time necessary to build the app and to copy the debug build to the provider, it becomes a real bottleneck to moving fast.

So far, we have experimented with running them twice a day, but let’s see how it goes.

What’s Next?

Many interesting tasks are still on our todo list:

  • On the infrastructure side: performance testing, security testing, trying out Flutter for web
  • On the development side: serving and updating ML models for the recommendation engine, predicting cleaning duration, building a cache of feature vectors for recommendations, mixing optimisation problems into the matching engine, job scheduling, and game theory

And most important, nothing can replace real-world usage.

You’ll see many wild things only when you start collecting real data about user behavior, so we are looking forward to the upcoming launch!
