Nullius in verba.
— Royal Society, 1660
12.9× slower — and that’s the easy part

Two loops over the same array. Same data. Same sum operation. One walks the array sequentially; the other uses a random permutation for indirection. BenchmarkDotNet says SumRandom is 12.88× slower at one million elements. No surprise — random memory access is slower. Everyone knows that. But how much slower will it get when the dataset grows 64×? ...
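A minimal sketch of the two access patterns. The original benchmarks are C# under BenchmarkDotNet; this Python version only mirrors their shape, and the function names are illustrative, not the originals.

```python
import random

def sum_sequential(data):
    # Walk the array in memory order: the hardware prefetcher can stream
    # cache lines ahead of the loop.
    total = 0
    for x in data:
        total += x
    return total

def sum_random(data, perm):
    # Same elements, same sum -- but every load lands on an unpredictable
    # index, so each access can miss the cache.
    total = 0
    for i in perm:
        total += data[i]
    return total

n = 1_000_000
data = list(range(n))
perm = list(range(n))
random.shuffle(perm)

# Identical result, very different cost profile on real hardware.
assert sum_sequential(data) == sum_random(data, perm)
```

The point of the teaser is that the 12.88× ratio is not a constant: it depends on whether the working set fits in cache, which is exactly what changes when the dataset grows 64×.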
3% slower. Ship it.

Two filter variants over 20 million integers. Five benchmark iterations. FilterTernary: 26.11 ms. FilterBranch: 25.30 ms. The ternary is 3% slower. PR description writes itself. Merge. Deploy. Next day, rollback. Regression in production — on hardware where the difference vanishes, on data where it reverses. ...
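The shape of the two variants, sketched in Python as an analogue of the C# benchmarks (names are illustrative). In compiled code the ternary form often becomes a branchless conditional move, while the explicit branch leans on the predictor — cheap on predictable data, expensive on random data, which is why a single 3% number on one machine and one dataset proves nothing.

```python
import random

def filter_ternary(data, threshold):
    # Ternary select: every element is written; compilers often lower this
    # to a branchless conditional move.
    return [x if x >= threshold else 0 for x in data]

def filter_branch(data, threshold):
    # Explicit branch: cheap when the predictor guesses right,
    # costly when the data makes the branch unpredictable.
    out = []
    for x in data:
        if x >= threshold:
            out.append(x)
        else:
            out.append(0)
    return out

random.seed(0)
data = [random.randrange(100) for _ in range(10_000)]

# Both variants compute the same answer; only their cost model differs.
assert filter_ternary(data, 50) == filter_branch(data, 50)
assert filter_ternary([1, 5, 3], 3) == [0, 5, 3]
```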
p99 = 1 ms — flip one switch — p99 = 195 ms

Same service. Same pause pattern. Same nominal target rate. One change in the client model — p99 jumps 182×. Not a system failure. A measurement failure. ...
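The switch in question is the load-generator's client model: closed-loop (send the next request only after the previous one returns) versus open (requests have intended send times on a fixed cadence). A toy sketch with hypothetical numbers — a thousand 1 ms requests and one 200 ms stall — shows how the closed model hides queueing, the effect known as coordinated omission:

```python
def p99(samples):
    # 99th percentile by rank on the sorted samples.
    s = sorted(samples)
    return s[int(len(s) * 0.99)]

# Hypothetical workload: 1000 requests, 1 ms each, except the first,
# which stalls for 200 ms.
service = [200.0] + [1.0] * 999

# Closed-loop client: waits for each response before sending the next,
# so the stall shows up as exactly one slow sample.
closed = service[:]

# Open model: requests were *intended* at a fixed 1 ms cadence; everything
# that queued behind the stall inherits its delay.
open_model = []
busy_until = 0.0
for i, cost in enumerate(service):
    intended = float(i)                # intended send time, 1 ms cadence
    start = max(intended, busy_until)  # wait for the server to free up
    busy_until = start + cost
    open_model.append(busy_until - intended)

assert p99(closed) == 1.0        # the stall vanishes from the tail
assert p99(open_model) == 200.0  # the stall dominates the tail
```

The 1 ms / 195 ms figures in the teaser come from its own service; the toy numbers here only reproduce the mechanism.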
Same engine, different answers

Design fixed. Environment changed: cache temperature, GC pressure, data order, JIT tier. The numbers move by 2–6× without touching the algorithm. ...
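One defensive pattern against environment drift is to never report a single number: discard warm-up iterations (cold caches, lazy initialization, JIT tiers in runtimes that have them) and report the spread. A hypothetical harness, sketched in Python:

```python
import statistics
import time

def measure(fn, *, warmup=5, samples=20):
    # Hypothetical micro-harness: run warm-up iterations first, then
    # collect a distribution of timings rather than one measurement.
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return min(times), statistics.median(times), max(times)

lo, mid, hi = measure(lambda: sum(range(100_000)))
assert 0 < lo <= mid <= hi
```

If `hi / lo` is already 2–6× on one machine in one sitting, a single-run comparison between two variants is noise, not a verdict. BenchmarkDotNet does this bookkeeping (warm-up, many iterations, outlier handling) for real C# benchmarks.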
27.2M ops/sec — and a lie

Same two classes. Same data. Same machine. One benchmark says Dictionary + lock is 2× faster. Another says ConcurrentDictionary is 17× faster. A third says it doesn’t matter — fsync buries the difference in noise. Same optimization — three verdicts. ...
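Python has no ConcurrentDictionary, so this sketch only illustrates the third verdict: once every operation carries a durable write, the data-structure choice is buried. Names and the key/value format are illustrative, not from the original benchmarks.

```python
import os
import tempfile
import threading

lock = threading.Lock()
table = {}

def insert_locked(key, value):
    # Analogue of "Dictionary + lock": a coarse lock around a plain hash map.
    with lock:
        table[key] = value

def insert_durable(key, value, fh):
    # Same insert, but each write is followed by an fsync. The syscall costs
    # orders of magnitude more than either dictionary strategy, so under this
    # workload the "2x faster" and "17x faster" verdicts both dissolve.
    insert_locked(key, value)
    fh.write(f"{key}={value}\n".encode())
    fh.flush()
    os.fsync(fh.fileno())

with tempfile.NamedTemporaryFile() as fh:
    for i in range(100):
        insert_durable(i, i * i, fh)

assert len(table) == 100
assert table[7] == 49
```

Three verdicts from one optimization means the benchmark, not the code, decided the answer — which workload it modeled is the real variable.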