Testing Mobile Apps: Emulators, Simulators, and Real Devices
You just finished building your mobile app. It runs perfectly on your development machine; the build is signed and ready. You feel good about shipping it to the store. But here is the uncomfortable truth: your laptop is not your user's phone. Not even close.
Users run your app on hundreds of different devices. Different screen sizes, OS versions, battery levels, network conditions, and hardware configurations. A bug that never shows up on your development machine can crash the app on a specific phone model. One bad release can tank your store rating and lose users overnight.
Testing mobile apps is not optional. But how you test, and where you run those tests, makes a big difference in how much confidence you actually get before shipping.
The Testing Layers That Matter
Mobile testing is not one thing. It is a stack of layers, each catching different kinds of problems at different speeds.
Unit tests sit at the bottom. They verify that a single piece of behavior works correctly. In mobile terms, this means testing from a meaningful entry point: does the ViewModel react correctly when the user taps a button? Does the use case return the right output for a given input? You are not testing internal implementation details. You are testing observable behavior. Unit tests run fast, often in milliseconds. You can trigger them on every code change in your pipeline using an emulator or simulator. No physical device needed.
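To make this concrete, here is a minimal sketch in plain Java, with no Android dependencies. The `CounterViewModel` class and its methods are invented for this example; the point is that the test exercises observable behavior (what the UI would render after a tap), not internal fields:

```java
// A hypothetical ViewModel with no framework dependencies: pure logic, easy to unit test.
class CounterViewModel {
    private int count = 0;

    // Entry point: what happens when the user taps the "+" button.
    void onIncrementTapped() {
        count++;
    }

    // Observable state the UI renders.
    String displayText() {
        return "Count: " + count;
    }
}

public class CounterViewModelTest {
    public static void main(String[] args) {
        CounterViewModel vm = new CounterViewModel();

        // Test observable behavior, not implementation details:
        // after two taps, the UI-facing text should reflect 2.
        vm.onIncrementTapped();
        vm.onIncrementTapped();

        if (!vm.displayText().equals("Count: 2")) {
            throw new AssertionError("expected 'Count: 2', got " + vm.displayText());
        }
        System.out.println("unit test passed: " + vm.displayText());
    }
}
```

Because nothing here touches the Android framework, this style of test runs on the JVM in milliseconds, which is what makes per-commit execution cheap.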
Integration tests sit one level up. They check how components work together. Does the data from the API actually display correctly on the screen? Does local storage work after the user logs in? These tests need an environment that resembles a real device. Emulators, simulators, or actual devices all work here, depending on what you are integrating.
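As an illustration of the wiring such a test checks, here is a hedged sketch in plain Java: a hypothetical `UserRepository` connected to a fake API and an in-memory store, verifying that data "fetched from the network" actually ends up where the screen reads it. All names are invented for this example, and the fakes stand in for real network and database layers:

```java
import java.util.HashMap;
import java.util.Map;

interface Api {                    // stands in for the network layer
    String fetchUserName(int id);
}

class InMemoryStore {              // stands in for local storage (e.g. a database)
    private final Map<Integer, String> data = new HashMap<>();
    void save(int id, String name) { data.put(id, name); }
    String load(int id) { return data.get(id); }
}

class UserRepository {
    private final Api api;
    private final InMemoryStore store;
    UserRepository(Api api, InMemoryStore store) { this.api = api; this.store = store; }

    // The wiring under test: fetch from the API, cache locally, return for display.
    String userNameForScreen(int id) {
        String name = api.fetchUserName(id);
        store.save(id, name);
        return store.load(id);
    }
}

public class UserRepositoryIntegrationTest {
    public static void main(String[] args) {
        Api fakeApi = id -> "Ada";   // fake API: no network needed
        InMemoryStore store = new InMemoryStore();
        UserRepository repo = new UserRepository(fakeApi, store);

        String shown = repo.userNameForScreen(7);
        if (!"Ada".equals(shown) || !"Ada".equals(store.load(7))) {
            throw new AssertionError("API data did not reach the screen/store");
        }
        System.out.println("integration test passed: " + shown);
    }
}
```

Swap the fakes for real implementations and the same test shape becomes a device-backed integration test; the structure is what matters.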
UI tests sit at the top. They simulate real user interactions: tapping buttons, filling forms, scrolling lists, and verifying that the expected elements appear on screen. These tests are the closest to what your users actually experience. They are also the slowest and most fragile. A small layout change can break them. But when they pass, you know the app works from the user's perspective.
Each layer has its place. Unit tests give you fast feedback. Integration tests catch wiring problems. UI tests validate the experience. You need all three, but you do not need to run all of them on every trigger.
Emulators and Simulators: Fast but Not Perfect
Emulators (Android) and simulators (iOS) are the workhorses of mobile testing. The names reflect a real difference: an Android emulator emulates device hardware and boots a full Android system image, while an iOS simulator runs a simulated iOS environment natively on your Mac without emulating the phone's hardware. Either way, they are free, easy to spin up in a CI/CD pipeline, and good enough for most logic and layout checks.
You can run unit tests, integration tests, and even UI tests on them. They boot quickly, support different OS versions, and let you simulate various screen sizes. For daily development and internal builds, they are usually sufficient.
But they have a blind spot. Emulators and simulators do not reproduce real device behavior. Performance is different. Battery consumption is different. Sensor responses are different. Network behavior on cellular is different. A test that passes perfectly on an emulator can fail on a physical device because of timing, memory pressure, or hardware-specific quirks.
To make this practical, here is a GitHub Actions job that creates an Android emulator, waits for it to boot, runs instrumented tests, and collects the results:
```yaml
name: Android Instrumented Tests

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up JDK 17
        uses: actions/setup-java@v4
        with:
          java-version: '17'
          distribution: 'temurin'

      - name: Create and start emulator
        run: |
          # Install the system image before creating the AVD from it
          yes | sdkmanager "system-images;android-33;google_apis;x86_64"
          echo "no" | avdmanager create avd -n testDevice -k "system-images;android-33;google_apis;x86_64" --force
          $ANDROID_HOME/emulator/emulator -avd testDevice -no-window -no-audio -no-boot-anim &

      - name: Wait for emulator to boot
        run: |
          adb wait-for-device
          # wait-for-device only waits for ADB; poll until Android itself finishes booting
          adb shell 'while [ "$(getprop sys.boot_completed)" != "1" ]; do sleep 2; done'
          # Disable animations so UI tests are faster and less flaky
          adb shell settings put global window_animation_scale 0.0
          adb shell settings put global transition_animation_scale 0.0
          adb shell settings put global animator_duration_scale 0.0

      - name: Run instrumented tests
        run: ./gradlew connectedCheck

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: app/build/reports/androidTests/connected/
```
If you only test on emulators, you are shipping with partial information.
Device Farms: Real Hardware at Scale
This is where device farms come in. A device farm is a service that gives you access to real phones and tablets sitting in a data center. You upload your app, run your tests across dozens of devices simultaneously, and get a report showing what passed and what broke on each device.
Popular options include Firebase Test Lab for both Android and iOS, and AWS Device Farm. These services support different device models, OS versions, and screen sizes. You can integrate them directly into your CI/CD pipeline. Every time a build completes, the pipeline triggers tests on real hardware before the app ever reaches the store.
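To show how this slots into a pipeline, here is a sketch of a GitHub Actions step that sends instrumented tests to Firebase Test Lab via the gcloud CLI. The project ID, secret name, APK paths, and device choices are placeholders you would adapt; the step assumes earlier jobs already built both APKs, and valid model IDs can be listed with `gcloud firebase test android models list`:

```yaml
- name: Run tests on Firebase Test Lab
  run: |
    # Authenticate with a service account key stored as a repository secret
    echo "$GCP_SA_KEY" > sa-key.json
    gcloud auth activate-service-account --key-file=sa-key.json
    gcloud config set project my-app-project   # placeholder project ID

    # Run the instrumented test APK against real devices in the farm
    gcloud firebase test android run \
      --type instrumentation \
      --app app/build/outputs/apk/debug/app-debug.apk \
      --test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
      --device model=oriole,version=33 \
      --device model=redfin,version=30 \
      --timeout 15m
  env:
    GCP_SA_KEY: ${{ secrets.GCP_SA_KEY }}
```

Each `--device` flag adds another real phone to the run, which is how a single command fans out across the hardware matrix.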
Device farms catch problems that emulators miss. Crashes on specific hardware, layout issues on unusual screen ratios, performance degradation on older devices. They also give you screenshots and logs from each test run, making it easier to diagnose failures.
But device farms are not free. They cost money per test run, and the tests take longer to complete than emulator tests. You do not want to run every single build through a device farm. That would be slow and expensive.
When to Use What
The decision is practical, not ideological. Use emulators and simulators for fast feedback during development and for internal builds. Use device farms strategically before releases.
The recommended environment for each test type comes down to a trade-off between speed and fidelity. Here is a simple rule of thumb:
- Every commit: run unit tests on emulator or simulator.
- Every pull request: run unit tests plus integration tests on emulator or simulator.
- Before a staged rollout or phased release: run the full test suite on a device farm, covering a representative set of devices.
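One way to encode that rule of thumb in CI is to gate the expensive job on how the workflow was triggered. A sketch for GitHub Actions, with job names, Gradle tasks, and the `v*` tag convention as placeholder assumptions:

```yaml
on:
  push:
    branches: [main]
    tags: ['v*']          # release tags trigger the full pipeline
  pull_request:

jobs:
  unit-tests:             # every push and PR: fast emulator/JVM tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew testDebugUnitTest

  device-farm-tests:      # only release tags: slow, paid, real hardware
    if: startsWith(github.ref, 'refs/tags/v')
    needs: unit-tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Upload APKs to the device farm here"
```

The `if:` condition is what keeps device-farm costs tied to releases instead of every commit.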
The key is to match the testing intensity to the risk. A small bug fix in an internal feature does not need a full device farm run. A release that will reach thousands of users absolutely does.
A Practical Checklist Before Store Submission
Before you hit the submit button on your app store dashboard, run through this quick checklist:
- Unit tests pass on emulator or simulator for the target OS versions.
- Integration tests pass for the main user flows (login, data display, navigation).
- UI tests pass on at least one emulator and one physical device.
- Device farm tests pass for the top 5-10 device models your analytics show users actually have.
- No crash or ANR (Application Not Responding) reports from the device farm run.
- Screenshots from device farm match expected layouts across different screen sizes.
This checklist is not exhaustive, but it catches the most common issues that slip through to production.
The Concrete Takeaway
Testing on emulators alone is like test-driving a car in a parking lot. It tells you the steering works, but it does not tell you how the car handles on the highway. Device farms give you highway data. Use emulators for speed and device farms for confidence. Automate both in your pipeline, and you will ship fewer bugs and sleep better at night.