

Developer Quest

Become a tech lead, slay bugs, and don't get fired.

All in Flutter.

Research tree

The game progression is based on a "research tree" of tasks. The tree is defined in code in lib/src/shared_state/task_tree, but for clarity it is also kept as a diagram (a PNG in assets/docs).

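In code, the tree boils down to tasks that point at their prerequisites. The sketch below is illustrative only: the names TaskNode, requirements, and isUnlockedBy are made up here, and the real definitions in lib/src/shared_state/task_tree may differ.

// Illustrative only: a task that unlocks once its prerequisites are done.
// See lib/src/shared_state/task_tree for the real definitions.
class TaskNode {
  final String name;
  final List<TaskNode> requirements;

  const TaskNode(this.name, {this.requirements = const []});

  // A task becomes available once all of its prerequisites are completed.
  bool isUnlockedBy(Set<TaskNode> completed) =>
      requirements.every(completed.contains);
}

// A tiny two-node example of the same shape as the diagram.
const prototype = TaskNode('Prototype');
const alpha = TaskNode('Alpha release', requirements: [prototype]);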

Performance testing

Attach a real device and run the following command from the root of the repo:

flutter drive --target=test_driver/performance.dart --profile

This does an automated run-through of the app and saves the output to files:

  • Look in the build/walkthrough-*.json files for detailed summaries of each run.
  • Look at test_driver/perf_stats.tsv to compare the latest runs with historical data.
  • Run Rscript test_driver/generate-graphs.R (assuming you have R installed) to generate box plots of the latest runs. These show up as test_driver/*.pdf files.
  • Peruse the raw data (used by R to generate the box plots) by opening the durations.tsv file. It contains the build and rasterization times for each frame of every run; a minimal Dart sketch for reading it follows below.
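If you'd rather skim the raw numbers without R, a small Dart script can do a rough first pass. This is only a sketch, not part of the repo: the path test_driver/durations.tsv, the header row, and the column index of the frame build times are all assumptions, so check the actual file layout before trusting the numbers.

// rough_stats.dart -- a hypothetical helper, not part of the repo.
import 'dart:io';

void main() {
  final lines = File('test_driver/durations.tsv').readAsLinesSync();
  // Assumption: the first line is a header and column 1 holds the
  // frame build times. Adjust to the actual file layout.
  final buildTimes = lines
      .skip(1)
      .map((line) => line.split('\t'))
      .where((cols) => cols.length > 1)
      .map((cols) => double.tryParse(cols[1]) ?? 0.0)
      .toList()
    ..sort();
  final p90 = buildTimes[(buildTimes.length * 0.9).floor()];
  print('frames: ${buildTimes.length}');
  print('90th percentile build time: $p90');
}

Save it anywhere and run it with dart from the root of the repo.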

If you want to get several runs at once, you can use something like the following command:

DESC="my change" bash -c 'for i in {1..5}; do flutter drive --target=test_driver/performance.dart --profile; sleep 1; done'

Why run several times when each walkthrough already yields so many data points? With several identical walkthroughs it's possible to visually check the variance between runs. These nuances get lost in summary stats, even with box plots, so from a single run it's hard to tell whether a change actually brought any performance improvement. Running several times also reduces the effect of extremely bad luck, such as Android deciding to update some app while a test is running.

Lock CPU and GPU speed for your performance test device

Run the following command while your performance test device is attached via USB. Locking the clock speeds keeps CPU/GPU frequency scaling and thermal throttling from adding noise to the measurements.

./tool/lock_android_scaling.sh

WARNING:

Where to store the profiling data

You probably don't want to check the *.tsv output files into the repo. To have git ignore changes to them, run git update-index --assume-unchanged test_driver/*.tsv in the root dir. This is a one-time command per machine.
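If you later do need to commit changes to those files, undo this with git update-index --no-assume-unchanged test_driver/*.tsv.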