mitigate slow hashing in google-chrome

file hashing became drastically slower in recent chrome versions;

* 748 MiB/s in 131.0.6778.86
* 747 MiB/s in 132.0.6834.160
* 485 MiB/s in 133.0.6943.60
* 319 MiB/s in 134.0.6998.36

the silver lining: the situation around chrome-bug 1352210 seems
to be improving (crypto.subtle, the native hasher, now scales
across multiple cores; a rough sketch of the per-worker hashing
follows the list below)

* 133.0.6943.60: speed peaked at 2 threads; 341 MiB/s, 485 MiB/s
* 134.0.6998.36: peak at 7; 193, 383, 383, 408, 421, 431, 438, 438
* 137.0.7151.41: peak at 8; 210, 382, 445, 513, 573, 573, 585, 598
   MiB/s when hashing with 1, 2, ..., 7, 8 webworkers respectively
   on a ryzen7-5800x with 2x16g 2133mhz ram
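a rough typescript sketch of what each webworker in that benchmark
does; the message format and digest choice (SHA-512) are assumptions
here, not necessarily the actual up2k code:

  // dedicated-worker sketch; each worker receives one chunk of the
  // file and hands it to chrome's native hasher. crypto.subtle.digest
  // runs outside the js thread, and since the fix for chrome-bug
  // 1352210 it can also run concurrently across several workers
  self.onmessage = async (ev: MessageEvent<ArrayBuffer>) => {
    const digest = await crypto.subtle.digest("SHA-512", ev.data);
    postMessage(new Uint8Array(digest));
  };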

characteristics of versions between v134 and v137 are unknown
(cannot find old official builds to test), but v137 is a good
cutoff for minimizing risk of hitting chrome-bugs
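a minimal sketch of how such a cutoff could be applied client-side,
assuming the chrome major version is taken from the user-agent
(the real detection logic may differ):

  // chrome UAs look like "... Chrome/137.0.7151.41 ..."; the major
  // version is enough to decide if the native hasher scales well
  const m = navigator.userAgent.match(/ Chrome\/(\d+)\./);
  const chromeMajor = m ? parseInt(m[1], 10) : 0;
  const subtleScalesWell = chromeMajor >= 137;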

meanwhile, hash-wasm scales linearly up to 8 cores;
0=328 1=377 2=738 3=947 4=1090 5=1190 6=1380 7=1530 8=1810
(values in MiB/s; 0 = wasm on the mainthread, no webworkers)
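the hash-wasm path keeps one wasm hasher instance per worker,
roughly like below (a sketch using hash-wasm's createSHA512 /
IHasher api; not the actual up2k code):

  import { createSHA512 } from "hash-wasm";

  // one wasm instance per worker (module worker, so top-level await
  // is ok); instances are fully independent, which is why throughput
  // keeps scaling as workers are added
  const hasher = await createSHA512();

  async function hashChunk(buf: Uint8Array): Promise<string> {
    hasher.init();          // reset state before each chunk
    hasher.update(buf);
    return hasher.digest(); // hex string by default
  }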

but it looks like chrome-bug 383568268 is making a return,
so keep the limit of max 4 threads if the machine has more than
4 cores (and numCores-1 otherwise)
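which boils down to roughly this (sketch; navigator.hardwareConcurrency
stands in for however the core count is actually detected):

  // cap at 4 workers on big machines to dodge chrome-bug 383568268,
  // otherwise leave one core free for the browser ui
  const numCores = navigator.hardwareConcurrency || 4;
  const numWorkers = numCores > 4 ? 4 : Math.max(1, numCores - 1);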

ed 2025-05-27 15:33:50 +00:00
parent 49c7124776
commit e3e51fb83a
8 changed files with 41 additions and 5 deletions

@@ -158,7 +158,7 @@ class Cfg(Namespace):
 ex = "au_vol dl_list mtab_age reg_cap s_thead s_tbody th_convt ups_who zip_who"
 ka.update(**{k: 9 for k in ex.split()})
-ex = "db_act forget_ip k304 loris no304 re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo u2ow zipmaxn zipmaxs"
+ex = "db_act forget_ip k304 loris no304 nosubtle re_maxage rproxy rsp_jtr rsp_slp s_wr_slp snap_wri theme themes turbo u2ow zipmaxn zipmaxs"
 ka.update(**{k: 0 for k in ex.split()})
 ex = "ah_alg bname chpw_db doctitle df exit favico idp_h_usr ipa html_head lg_sba lg_sbf log_fk md_sba md_sbf name og_desc og_site og_th og_title og_title_a og_title_v og_title_i shr tcolor textfiles unlist vname xff_src zipmaxt R RS SR"