Warning: Permanently added '34.227.193.145' (ED25519) to the list of known hosts. Running (timeout=18000): unbuffer mock --spec /var/lib/copr-rpmbuild/workspace/workdir-01m0kx71/python-tapyoca/python-tapyoca.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-01m0kx71/python-tapyoca --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1740863293.500096 -r /var/lib/copr-rpmbuild/results/configs/child.cfg INFO: mock.py version 6.0 starting (python version = 3.13.0, NVR = mock-6.0-1.fc41), args: /usr/libexec/mock/mock --spec /var/lib/copr-rpmbuild/workspace/workdir-01m0kx71/python-tapyoca/python-tapyoca.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-01m0kx71/python-tapyoca --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1740863293.500096 -r /var/lib/copr-rpmbuild/results/configs/child.cfg Start(bootstrap): init plugins INFO: tmpfs initialized INFO: selinux enabled INFO: chroot_scan: initialized INFO: compress_logs: initialized Finish(bootstrap): init plugins Start: init plugins INFO: tmpfs initialized INFO: selinux enabled INFO: chroot_scan: initialized INFO: compress_logs: initialized Finish: init plugins INFO: Signal handler active Start: run INFO: Start(/var/lib/copr-rpmbuild/workspace/workdir-01m0kx71/python-tapyoca/python-tapyoca.spec) Config(rhel+epel-9-x86_64) Start: clean chroot Finish: clean chroot Mock Version: 6.0 INFO: Mock Version: 6.0 Start(bootstrap): chroot init INFO: mounting tmpfs at /var/lib/mock/rhel+epel-9-x86_64-bootstrap-1740863293.500096/root. INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start(bootstrap): cleaning package manager metadata Finish(bootstrap): cleaning package manager metadata INFO: Guessed host environment type: unknown INFO: Using container image: registry.access.redhat.com/ubi9/ubi INFO: Pulling image: registry.access.redhat.com/ubi9/ubi INFO: Tagging container image as mock-bootstrap-71d8d894-b953-4827-b449-61216ae36666 INFO: Checking that 9125c6cae49371248bbb4de0d98e4ce5dec119d521a5244f8bb4a4ee6c8d3f1c image matches host's architecture INFO: Copy content of container 9125c6cae49371248bbb4de0d98e4ce5dec119d521a5244f8bb4a4ee6c8d3f1c to /var/lib/mock/rhel+epel-9-x86_64-bootstrap-1740863293.500096/root INFO: mounting 9125c6cae49371248bbb4de0d98e4ce5dec119d521a5244f8bb4a4ee6c8d3f1c with podman image mount INFO: image 9125c6cae49371248bbb4de0d98e4ce5dec119d521a5244f8bb4a4ee6c8d3f1c as /var/lib/containers/storage/overlay/708bb73104717130dfb640b54783babb28adfb2296ef1fb40023808c1861e3ad/merged INFO: umounting image 9125c6cae49371248bbb4de0d98e4ce5dec119d521a5244f8bb4a4ee6c8d3f1c (/var/lib/containers/storage/overlay/708bb73104717130dfb640b54783babb28adfb2296ef1fb40023808c1861e3ad/merged) with podman image umount INFO: Removing image mock-bootstrap-71d8d894-b953-4827-b449-61216ae36666 INFO: Package manager dnf4 detected and used (fallback) INFO: Not updating bootstrap chroot, bootstrap_image_ready=True Start(bootstrap): creating root cache Finish(bootstrap): creating root cache Finish(bootstrap): chroot init Start: chroot init INFO: mounting tmpfs at /var/lib/mock/rhel+epel-9-x86_64-1740863293.500096/root. INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start: cleaning package manager metadata Finish: cleaning package manager metadata INFO: enabled HW Info plugin INFO: Package manager dnf4 detected and used (direct choice) INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-4.16.1.3-34.el9.x86_64 python3-dnf-4.14.0-17.el9.noarch python3-dnf-plugins-core-4.3.0-16.el9.noarch yum-4.14.0-17.el9.noarch Start: installing minimal buildroot with dnf No matches found for the following disable plugin patterns: local, spacewalk, versionlock Updating Subscription Management repositories. Unable to read consumer identity This system is not registered with an entitlement server. You can use subscription-manager to register. Copr repository 421 MB/s | 65 MB 00:00 Red Hat Enterprise Linux - BaseOS 140 MB/s | 46 MB 00:00 Red Hat Enterprise Linux - AppStream 199 MB/s | 50 MB 00:00 Red Hat Enterprise Linux - CodeReady Linux Buil 44 MB/s | 11 MB 00:00 Extra Packages for Enterprise Linux 9 - x86_64 171 MB/s | 23 MB 00:00 Last metadata expiration check: 0:00:02 ago on Sat Mar 1 21:09:19 2025. Dependencies resolved. ================================================================================ Package Arch Version Repo Size ================================================================================ Installing: bash x86_64 5.1.8-9.el9 baseos 1.7 M bzip2 x86_64 1.0.8-10.el9_5 baseos 60 k coreutils x86_64 8.32-36.el9 baseos 1.2 M cpio x86_64 2.13-16.el9 baseos 279 k diffutils x86_64 3.7-12.el9 baseos 402 k epel-rpm-macros noarch 9-15.el9 epel 15 k findutils x86_64 1:4.8.0-7.el9 baseos 550 k gawk x86_64 5.1.0-6.el9 baseos 1.0 M glibc-minimal-langpack x86_64 2.34-125.el9_5.1 baseos 23 k grep x86_64 3.6-5.el9 baseos 273 k gzip x86_64 1.12-1.el9 baseos 167 k info x86_64 6.7-15.el9 baseos 228 k patch x86_64 2.7.6-16.el9 appstream 130 k redhat-release x86_64 9.5-0.6.el9 baseos 45 k redhat-rpm-config noarch 208-1.el9 appstream 75 k rpm-build x86_64 4.16.1.3-34.el9 appstream 71 k sed x86_64 4.8-9.el9 baseos 309 k tar x86_64 2:1.34-7.el9 baseos 889 k unzip x86_64 6.0-57.el9 baseos 186 k util-linux x86_64 2.37.4-20.el9 baseos 2.3 M which x86_64 2.21-29.el9 baseos 45 k xz x86_64 5.2.5-8.el9_0 baseos 230 k Installing dependencies: alternatives x86_64 1.24-1.el9_5.1 baseos 42 k ansible-srpm-macros noarch 1-16.el9 epel 21 k audit-libs x86_64 3.1.5-1.el9 baseos 124 k basesystem noarch 11-13.el9 baseos 8.0 k binutils x86_64 2.35.2-54.el9 baseos 4.6 M binutils-gold x86_64 2.35.2-54.el9 baseos 735 k bzip2-libs x86_64 1.0.8-10.el9_5 baseos 42 k ca-certificates noarch 2024.2.69_v8.0.303-91.4.el9_4 baseos 1.0 M coreutils-common x86_64 8.32-36.el9 baseos 2.0 M cracklib x86_64 2.9.6-27.el9 baseos 99 k cracklib-dicts x86_64 2.9.6-27.el9 baseos 3.6 M crypto-policies noarch 20240828-2.git626aa59.el9_5 baseos 90 k curl x86_64 7.76.1-31.el9 baseos 297 k cyrus-sasl-lib x86_64 2.1.27-21.el9 baseos 774 k debugedit x86_64 5.0-5.el9 appstream 81 k dwz x86_64 0.14-3.el9 appstream 130 k ed x86_64 1.14.2-12.el9 baseos 78 k efi-srpm-macros noarch 6-2.el9_0 appstream 24 k elfutils x86_64 0.191-4.el9 baseos 566 k elfutils-debuginfod-client x86_64 0.191-4.el9 baseos 39 k elfutils-default-yama-scope noarch 0.191-4.el9 baseos 12 k elfutils-libelf x86_64 0.191-4.el9 baseos 210 k elfutils-libs x86_64 0.191-4.el9 baseos 262 k file x86_64 5.39-16.el9 baseos 52 k file-libs x86_64 5.39-16.el9 baseos 593 k filesystem x86_64 3.16-5.el9 baseos 4.8 M fonts-srpm-macros noarch 1:2.0.5-7.el9.1 appstream 29 k forge-srpm-macros noarch 0.4.0-1.el9 epel 19 k fpc-srpm-macros noarch 1.3-7.el9 epel 7.5 k gdb-minimal x86_64 14.2-3.el9 appstream 4.2 M gdbm-libs x86_64 1:1.23-1.el9 baseos 59 k ghc-srpm-macros noarch 1.5.0-6.el9 appstream 9.0 k ghc-srpm-macros-epel noarch 1.9.1-1.el9 epel 7.5 k glibc x86_64 2.34-125.el9_5.1 baseos 2.0 M glibc-common x86_64 2.34-125.el9_5.1 baseos 308 k glibc-gconv-extra x86_64 2.34-125.el9_5.1 baseos 1.7 M gmp x86_64 1:6.2.0-13.el9 baseos 319 k go-srpm-macros noarch 3.6.0-3.el9 appstream 28 k go-srpm-macros-epel noarch 3.6.0.1-1.el9 epel 26 k groff-base x86_64 1.22.4-10.el9 baseos 1.1 M kernel-srpm-macros noarch 1.0-13.el9 appstream 17 k keyutils-libs x86_64 1.6.3-1.el9 baseos 34 k krb5-libs x86_64 1.21.1-4.el9_5 baseos 771 k libacl x86_64 2.3.1-4.el9 baseos 24 k libarchive x86_64 3.5.3-4.el9 baseos 392 k libattr x86_64 2.5.1-3.el9 baseos 20 k libblkid x86_64 2.37.4-20.el9 baseos 109 k libbrotli x86_64 1.0.9-7.el9_5 baseos 316 k libcap x86_64 2.48-9.el9_2 baseos 74 k libcap-ng x86_64 0.8.2-7.el9 baseos 36 k libcom_err x86_64 1.46.5-5.el9 baseos 28 k libcurl x86_64 7.76.1-31.el9 baseos 286 k libdb x86_64 5.3.28-54.el9 baseos 737 k libeconf x86_64 0.4.1-4.el9 baseos 30 k libevent x86_64 2.1.12-8.el9_4 baseos 266 k libfdisk x86_64 2.37.4-20.el9 baseos 155 k libffi x86_64 3.4.2-8.el9 baseos 40 k libgcc x86_64 11.5.0-5.el9_5 baseos 88 k libgcrypt x86_64 1.10.0-11.el9 baseos 510 k libgomp x86_64 11.5.0-5.el9_5 baseos 263 k libgpg-error x86_64 1.42-5.el9 baseos 220 k libidn2 x86_64 2.3.0-7.el9 baseos 105 k libmount x86_64 2.37.4-20.el9 baseos 136 k libnghttp2 x86_64 1.43.0-6.el9 baseos 75 k libpkgconf x86_64 1.7.3-10.el9 baseos 37 k libpsl x86_64 0.21.1-5.el9 baseos 66 k libpwquality x86_64 1.4.4-8.el9 baseos 123 k libselinux x86_64 3.6-1.el9 baseos 88 k libsemanage x86_64 3.6-2.1.el9_5 baseos 120 k libsepol x86_64 3.6-1.el9 baseos 331 k libsigsegv x86_64 2.13-4.el9 baseos 30 k libsmartcols x86_64 2.37.4-20.el9 baseos 64 k libssh x86_64 0.10.4-13.el9 baseos 220 k libssh-config noarch 0.10.4-13.el9 baseos 11 k libstdc++ x86_64 11.5.0-5.el9_5 baseos 742 k libtasn1 x86_64 4.16.0-8.el9_1 baseos 77 k libunistring x86_64 0.9.10-15.el9 baseos 499 k libutempter x86_64 1.2.1-6.el9 baseos 30 k libuuid x86_64 2.37.4-20.el9 baseos 30 k libverto x86_64 0.3.2-3.el9 baseos 24 k libxcrypt x86_64 4.4.18-3.el9 baseos 120 k libxml2 x86_64 2.9.13-6.el9_5.1 baseos 752 k libzstd x86_64 1.5.1-2.el9 baseos 332 k lua-libs x86_64 5.4.4-4.el9 baseos 132 k lua-srpm-macros noarch 1-6.el9 appstream 10 k lz4-libs x86_64 1.9.3-5.el9 baseos 69 k mpfr x86_64 4.1.0-7.el9 baseos 329 k ncurses x86_64 6.2-10.20210508.el9 baseos 410 k ncurses-base noarch 6.2-10.20210508.el9 baseos 99 k ncurses-libs x86_64 6.2-10.20210508.el9 baseos 332 k ocaml-srpm-macros noarch 6-6.el9 appstream 9.1 k openblas-srpm-macros noarch 2-11.el9 appstream 8.6 k openldap x86_64 2.6.6-3.el9 baseos 286 k openssl x86_64 1:3.2.2-6.el9_5.1 baseos 1.4 M openssl-fips-provider x86_64 3.0.7-6.el9_5 baseos 9.4 k openssl-fips-provider-so x86_64 3.0.7-6.el9_5 baseos 577 k openssl-libs x86_64 1:3.2.2-6.el9_5.1 baseos 2.1 M p11-kit x86_64 0.25.3-3.el9_5 baseos 536 k p11-kit-trust x86_64 0.25.3-3.el9_5 baseos 144 k pam x86_64 1.5.1-22.el9_5 baseos 632 k pcre x86_64 8.44-4.el9 baseos 200 k pcre2 x86_64 10.40-6.el9 baseos 236 k pcre2-syntax noarch 10.40-6.el9 baseos 144 k perl-AutoLoader noarch 5.74-481.el9 appstream 21 k perl-B x86_64 1.80-481.el9 appstream 184 k perl-Carp noarch 1.50-460.el9 appstream 31 k perl-Class-Struct noarch 0.66-481.el9 appstream 22 k perl-Data-Dumper x86_64 2.174-462.el9 appstream 59 k perl-Digest noarch 1.19-4.el9 appstream 29 k perl-Digest-MD5 x86_64 2.58-4.el9 appstream 39 k perl-Encode x86_64 4:3.08-462.el9 appstream 1.7 M perl-Errno x86_64 1.30-481.el9 appstream 15 k perl-Exporter noarch 5.74-461.el9 appstream 34 k perl-Fcntl x86_64 1.13-481.el9 appstream 22 k perl-File-Basename noarch 2.85-481.el9 appstream 17 k perl-File-Path noarch 2.18-4.el9 appstream 38 k perl-File-Temp noarch 1:0.231.100-4.el9 appstream 63 k perl-File-stat noarch 1.09-481.el9 appstream 17 k perl-FileHandle noarch 2.03-481.el9 appstream 16 k perl-Getopt-Long noarch 1:2.52-4.el9 appstream 64 k perl-Getopt-Std noarch 1.12-481.el9 appstream 16 k perl-HTTP-Tiny noarch 0.076-462.el9 appstream 57 k perl-IO x86_64 1.43-481.el9 appstream 92 k perl-IO-Socket-IP noarch 0.41-5.el9 appstream 45 k perl-IO-Socket-SSL noarch 2.073-2.el9 appstream 221 k perl-IPC-Open3 noarch 1.21-481.el9 appstream 24 k perl-MIME-Base64 x86_64 3.16-4.el9 appstream 34 k perl-Mozilla-CA noarch 20200520-6.el9 appstream 14 k perl-Net-SSLeay x86_64 1.94-1.el9 appstream 418 k perl-POSIX x86_64 1.94-481.el9 appstream 98 k perl-PathTools x86_64 3.78-461.el9 appstream 92 k perl-Pod-Escapes noarch 1:1.07-460.el9 appstream 22 k perl-Pod-Perldoc noarch 3.28.01-461.el9 appstream 92 k perl-Pod-Simple noarch 1:3.42-4.el9 appstream 229 k perl-Pod-Usage noarch 4:2.01-4.el9 appstream 43 k perl-Scalar-List-Utils x86_64 4:1.56-462.el9 appstream 75 k perl-SelectSaver noarch 1.02-481.el9 appstream 12 k perl-Socket x86_64 4:2.031-4.el9 appstream 58 k perl-Storable x86_64 1:3.21-460.el9 appstream 98 k perl-Symbol noarch 1.08-481.el9 appstream 14 k perl-Term-ANSIColor noarch 5.01-461.el9 appstream 51 k perl-Term-Cap noarch 1.17-460.el9 appstream 24 k perl-Text-ParseWords noarch 3.30-460.el9 appstream 18 k perl-Text-Tabs+Wrap noarch 2013.0523-460.el9 appstream 25 k perl-Time-Local noarch 2:1.300-7.el9 appstream 37 k perl-URI noarch 5.09-3.el9 appstream 125 k perl-base noarch 2.27-481.el9 appstream 16 k perl-constant noarch 1.33-461.el9 appstream 25 k perl-if noarch 0.60.800-481.el9 appstream 14 k perl-interpreter x86_64 4:5.32.1-481.el9 appstream 73 k perl-libnet noarch 3.13-4.el9 appstream 134 k perl-libs x86_64 4:5.32.1-481.el9 appstream 2.2 M perl-mro x86_64 1.23-481.el9 appstream 29 k perl-overload noarch 1.31-481.el9 appstream 46 k perl-overloading noarch 0.02-481.el9 appstream 13 k perl-parent noarch 1:0.238-460.el9 appstream 16 k perl-podlators noarch 1:4.14-460.el9 appstream 118 k perl-srpm-macros noarch 1-41.el9 appstream 9.4 k perl-subs noarch 1.03-481.el9 appstream 12 k perl-vars noarch 1.05-481.el9 appstream 13 k pkgconf x86_64 1.7.3-10.el9 baseos 45 k pkgconf-m4 noarch 1.7.3-10.el9 baseos 16 k pkgconf-pkg-config x86_64 1.7.3-10.el9 baseos 12 k popt x86_64 1.18-8.el9 baseos 69 k publicsuffix-list-dafsa noarch 20210518-3.el9 baseos 59 k pyproject-srpm-macros noarch 1.12.0-1.el9 appstream 14 k python-srpm-macros noarch 3.9-54.el9 appstream 18 k qt5-srpm-macros noarch 5.15.9-1.el9 appstream 9.1 k qt6-srpm-macros noarch 6.6.2-1.el9 epel 8.7 k readline x86_64 8.1-4.el9 baseos 215 k rpm x86_64 4.16.1.3-34.el9 baseos 541 k rpm-build-libs x86_64 4.16.1.3-34.el9 baseos 90 k rpm-libs x86_64 4.16.1.3-34.el9 baseos 311 k rpmautospec-rpm-macros noarch 0.7.3-1.el9 epel 10 k rust-srpm-macros noarch 17-4.el9 appstream 11 k rust-srpm-macros-epel noarch 26.3-1.el9 epel 10 k setup noarch 2.13.7-10.el9 baseos 150 k shadow-utils x86_64 2:4.9-10.el9_5 baseos 1.2 M sqlite-libs x86_64 3.34.1-7.el9_3 baseos 622 k systemd-libs x86_64 252-46.el9_5.2 baseos 684 k tzdata noarch 2025a-1.el9 baseos 841 k util-linux-core x86_64 2.37.4-20.el9 baseos 468 k xz-libs x86_64 5.2.5-8.el9_0 baseos 94 k zip x86_64 3.0-35.el9 baseos 270 k zlib x86_64 1.2.11-40.el9 baseos 93 k zstd x86_64 1.5.1-2.el9 baseos 552 k Transaction Summary ================================================================================ Install 208 Packages Total download size: 69 M Installed size: 219 M Downloading Packages: (1/208): basesystem-11-13.el9.noarch.rpm 127 kB/s | 8.0 kB 00:00 (2/208): cracklib-2.9.6-27.el9.x86_64.rpm 1.3 MB/s | 99 kB 00:00 (3/208): cpio-2.13-16.el9.x86_64.rpm 3.3 MB/s | 279 kB 00:00 (4/208): ed-1.14.2-12.el9.x86_64.rpm 1.6 MB/s | 78 kB 00:00 (5/208): grep-3.6-5.el9.x86_64.rpm 5.7 MB/s | 273 kB 00:00 (6/208): groff-base-1.22.4-10.el9.x86_64.rpm 11 MB/s | 1.1 MB 00:00 (7/208): libattr-2.5.1-3.el9.x86_64.rpm 320 kB/s | 20 kB 00:00 (8/208): libcap-ng-0.8.2-7.el9.x86_64.rpm 814 kB/s | 36 kB 00:00 (9/208): libgpg-error-1.42-5.el9.x86_64.rpm 3.6 MB/s | 220 kB 00:00 (10/208): libpsl-0.21.1-5.el9.x86_64.rpm 1.3 MB/s | 66 kB 00:00 (11/208): libidn2-2.3.0-7.el9.x86_64.rpm 1.5 MB/s | 105 kB 00:00 (12/208): libsigsegv-2.13-4.el9.x86_64.rpm 724 kB/s | 30 kB 00:00 (13/208): libpwquality-1.4.4-8.el9.x86_64.rpm 2.5 MB/s | 123 kB 00:00 (14/208): libunistring-0.9.10-15.el9.x86_64.rpm 9.7 MB/s | 499 kB 00:00 (15/208): libutempter-1.2.1-6.el9.x86_64.rpm 566 kB/s | 30 kB 00:00 (16/208): info-6.7-15.el9.x86_64.rpm 792 kB/s | 228 kB 00:00 (17/208): libxcrypt-4.4.18-3.el9.x86_64.rpm 2.6 MB/s | 120 kB 00:00 (18/208): libzstd-1.5.1-2.el9.x86_64.rpm 7.5 MB/s | 332 kB 00:00 (19/208): libverto-0.3.2-3.el9.x86_64.rpm 291 kB/s | 24 kB 00:00 (20/208): mpfr-4.1.0-7.el9.x86_64.rpm 7.4 MB/s | 329 kB 00:00 (21/208): lz4-libs-1.9.3-5.el9.x86_64.rpm 1.0 MB/s | 69 kB 00:00 (22/208): popt-1.18-8.el9.x86_64.rpm 1.3 MB/s | 69 kB 00:00 (23/208): publicsuffix-list-dafsa-20210518-3.el 926 kB/s | 59 kB 00:00 (24/208): sed-4.8-9.el9.x86_64.rpm 5.6 MB/s | 309 kB 00:00 (25/208): diffutils-3.7-12.el9.x86_64.rpm 8.2 MB/s | 402 kB 00:00 (26/208): readline-8.1-4.el9.x86_64.rpm 1.4 MB/s | 215 kB 00:00 (27/208): cracklib-dicts-2.9.6-27.el9.x86_64.rp 31 MB/s | 3.6 MB 00:00 (28/208): xz-5.2.5-8.el9_0.x86_64.rpm 4.3 MB/s | 230 kB 00:00 (29/208): gawk-5.1.0-6.el9.x86_64.rpm 11 MB/s | 1.0 MB 00:00 (30/208): xz-libs-5.2.5-8.el9_0.x86_64.rpm 1.6 MB/s | 94 kB 00:00 (31/208): zstd-1.5.1-2.el9.x86_64.rpm 11 MB/s | 552 kB 00:00 (32/208): gzip-1.12-1.el9.x86_64.rpm 3.6 MB/s | 167 kB 00:00 (33/208): cyrus-sasl-lib-2.1.27-21.el9.x86_64.r 12 MB/s | 774 kB 00:00 (34/208): libarchive-3.5.3-4.el9.x86_64.rpm 8.3 MB/s | 392 kB 00:00 (35/208): libtasn1-4.16.0-8.el9_1.x86_64.rpm 1.0 MB/s | 77 kB 00:00 (36/208): keyutils-libs-1.6.3-1.el9.x86_64.rpm 274 kB/s | 34 kB 00:00 (37/208): libcap-2.48-9.el9_2.x86_64.rpm 837 kB/s | 74 kB 00:00 (38/208): libpkgconf-1.7.3-10.el9.x86_64.rpm 364 kB/s | 37 kB 00:00 (39/208): pkgconf-m4-1.7.3-10.el9.noarch.rpm 228 kB/s | 16 kB 00:00 (40/208): pkgconf-pkg-config-1.7.3-10.el9.x86_6 141 kB/s | 12 kB 00:00 (41/208): zip-3.0-35.el9.x86_64.rpm 5.0 MB/s | 270 kB 00:00 (42/208): gmp-6.2.0-13.el9.x86_64.rpm 6.0 MB/s | 319 kB 00:00 (43/208): libffi-3.4.2-8.el9.x86_64.rpm 885 kB/s | 40 kB 00:00 (44/208): lua-libs-5.4.4-4.el9.x86_64.rpm 1.9 MB/s | 132 kB 00:00 (45/208): ncurses-6.2-10.20210508.el9.x86_64.rp 5.7 MB/s | 410 kB 00:00 (46/208): pkgconf-1.7.3-10.el9.x86_64.rpm 173 kB/s | 45 kB 00:00 (47/208): ncurses-base-6.2-10.20210508.el9.noar 2.0 MB/s | 99 kB 00:00 (48/208): which-2.21-29.el9.x86_64.rpm 741 kB/s | 45 kB 00:00 (49/208): zlib-1.2.11-40.el9.x86_64.rpm 2.0 MB/s | 93 kB 00:00 (50/208): ncurses-libs-6.2-10.20210508.el9.x86_ 3.5 MB/s | 332 kB 00:00 (51/208): bash-5.1.8-9.el9.x86_64.rpm 25 MB/s | 1.7 MB 00:00 (52/208): file-5.39-16.el9.x86_64.rpm 668 kB/s | 52 kB 00:00 (53/208): file-libs-5.39-16.el9.x86_64.rpm 5.8 MB/s | 593 kB 00:00 (54/208): libacl-2.3.1-4.el9.x86_64.rpm 338 kB/s | 24 kB 00:00 (55/208): libsepol-3.6-1.el9.x86_64.rpm 7.3 MB/s | 331 kB 00:00 (56/208): libselinux-3.6-1.el9.x86_64.rpm 905 kB/s | 88 kB 00:00 (57/208): sqlite-libs-3.34.1-7.el9_3.x86_64.rpm 14 MB/s | 622 kB 00:00 (58/208): openldap-2.6.6-3.el9.x86_64.rpm 2.4 MB/s | 286 kB 00:00 (59/208): setup-2.13.7-10.el9.noarch.rpm 1.4 MB/s | 150 kB 00:00 (60/208): ca-certificates-2024.2.69_v8.0.303-91 14 MB/s | 1.0 MB 00:00 (61/208): libcom_err-1.46.5-5.el9.x86_64.rpm 392 kB/s | 28 kB 00:00 (62/208): libssh-0.10.4-13.el9.x86_64.rpm 4.9 MB/s | 220 kB 00:00 (63/208): libevent-2.1.12-8.el9_4.x86_64.rpm 2.9 MB/s | 266 kB 00:00 (64/208): libssh-config-0.10.4-13.el9.noarch.rp 237 kB/s | 11 kB 00:00 (65/208): alternatives-1.24-1.el9_5.1.x86_64.rp 695 kB/s | 42 kB 00:00 (66/208): binutils-2.35.2-54.el9.x86_64.rpm 77 MB/s | 4.6 MB 00:00 (67/208): audit-libs-3.1.5-1.el9.x86_64.rpm 1.7 MB/s | 124 kB 00:00 (68/208): binutils-gold-2.35.2-54.el9.x86_64.rp 16 MB/s | 735 kB 00:00 (69/208): coreutils-common-8.32-36.el9.x86_64.r 36 MB/s | 2.0 MB 00:00 (70/208): coreutils-8.32-36.el9.x86_64.rpm 15 MB/s | 1.2 MB 00:00 (71/208): crypto-policies-20240828-2.git626aa59 1.5 MB/s | 90 kB 00:00 (72/208): elfutils-0.191-4.el9.x86_64.rpm 11 MB/s | 566 kB 00:00 (73/208): elfutils-debuginfod-client-0.191-4.el 668 kB/s | 39 kB 00:00 (74/208): curl-7.76.1-31.el9.x86_64.rpm 3.4 MB/s | 297 kB 00:00 (75/208): elfutils-default-yama-scope-0.191-4.e 177 kB/s | 12 kB 00:00 (76/208): elfutils-libelf-0.191-4.el9.x86_64.rp 3.0 MB/s | 210 kB 00:00 (77/208): elfutils-libs-0.191-4.el9.x86_64.rpm 2.4 MB/s | 262 kB 00:00 (78/208): filesystem-3.16-5.el9.x86_64.rpm 79 MB/s | 4.8 MB 00:00 (79/208): findutils-4.8.0-7.el9.x86_64.rpm 8.9 MB/s | 550 kB 00:00 (80/208): gdbm-libs-1.23-1.el9.x86_64.rpm 923 kB/s | 59 kB 00:00 (81/208): glibc-common-2.34-125.el9_5.1.x86_64. 5.1 MB/s | 308 kB 00:00 (82/208): glibc-2.34-125.el9_5.1.x86_64.rpm 27 MB/s | 2.0 MB 00:00 (83/208): glibc-gconv-extra-2.34-125.el9_5.1.x8 29 MB/s | 1.7 MB 00:00 (84/208): libblkid-2.37.4-20.el9.x86_64.rpm 2.0 MB/s | 109 kB 00:00 (85/208): glibc-minimal-langpack-2.34-125.el9_5 363 kB/s | 23 kB 00:00 (86/208): libcurl-7.76.1-31.el9.x86_64.rpm 6.2 MB/s | 286 kB 00:00 (87/208): libdb-5.3.28-54.el9.x86_64.rpm 16 MB/s | 737 kB 00:00 (88/208): libeconf-0.4.1-4.el9.x86_64.rpm 700 kB/s | 30 kB 00:00 (89/208): libfdisk-2.37.4-20.el9.x86_64.rpm 3.1 MB/s | 155 kB 00:00 (90/208): libgcrypt-1.10.0-11.el9.x86_64.rpm 10 MB/s | 510 kB 00:00 (91/208): libmount-2.37.4-20.el9.x86_64.rpm 2.7 MB/s | 136 kB 00:00 (92/208): libnghttp2-1.43.0-6.el9.x86_64.rpm 1.6 MB/s | 75 kB 00:00 (93/208): libsmartcols-2.37.4-20.el9.x86_64.rpm 1.5 MB/s | 64 kB 00:00 (94/208): libuuid-2.37.4-20.el9.x86_64.rpm 677 kB/s | 30 kB 00:00 (95/208): openssl-fips-provider-3.0.7-6.el9_5.x 195 kB/s | 9.4 kB 00:00 (96/208): openssl-fips-provider-so-3.0.7-6.el9_ 13 MB/s | 577 kB 00:00 (97/208): pcre-8.44-4.el9.x86_64.rpm 4.5 MB/s | 200 kB 00:00 (98/208): pcre2-10.40-6.el9.x86_64.rpm 5.2 MB/s | 236 kB 00:00 (99/208): pcre2-syntax-10.40-6.el9.noarch.rpm 3.2 MB/s | 144 kB 00:00 (100/208): redhat-release-9.5-0.6.el9.x86_64.rp 673 kB/s | 45 kB 00:00 (101/208): rpm-4.16.1.3-34.el9.x86_64.rpm 10 MB/s | 541 kB 00:00 (102/208): rpm-build-libs-4.16.1.3-34.el9.x86_6 1.2 MB/s | 90 kB 00:00 (103/208): systemd-libs-252-46.el9_5.2.x86_64.r 14 MB/s | 684 kB 00:00 (104/208): rpm-libs-4.16.1.3-34.el9.x86_64.rpm 4.2 MB/s | 311 kB 00:00 (105/208): tar-1.34-7.el9.x86_64.rpm 18 MB/s | 889 kB 00:00 (106/208): unzip-6.0-57.el9.x86_64.rpm 3.0 MB/s | 186 kB 00:00 (107/208): util-linux-2.37.4-20.el9.x86_64.rpm 38 MB/s | 2.3 MB 00:00 (108/208): util-linux-core-2.37.4-20.el9.x86_64 10 MB/s | 468 kB 00:00 (109/208): bzip2-1.0.8-10.el9_5.x86_64.rpm 1.3 MB/s | 60 kB 00:00 (110/208): bzip2-libs-1.0.8-10.el9_5.x86_64.rpm 994 kB/s | 42 kB 00:00 (111/208): krb5-libs-1.21.1-4.el9_5.x86_64.rpm 16 MB/s | 771 kB 00:00 (112/208): libgcc-11.5.0-5.el9_5.x86_64.rpm 2.0 MB/s | 88 kB 00:00 (113/208): libgomp-11.5.0-5.el9_5.x86_64.rpm 5.7 MB/s | 263 kB 00:00 (114/208): libbrotli-1.0.9-7.el9_5.x86_64.rpm 3.6 MB/s | 316 kB 00:00 (115/208): libsemanage-3.6-2.1.el9_5.x86_64.rpm 2.4 MB/s | 120 kB 00:00 (116/208): libstdc++-11.5.0-5.el9_5.x86_64.rpm 15 MB/s | 742 kB 00:00 (117/208): openssl-3.2.2-6.el9_5.1.x86_64.rpm 29 MB/s | 1.4 MB 00:00 (118/208): p11-kit-0.25.3-3.el9_5.x86_64.rpm 12 MB/s | 536 kB 00:00 (119/208): openssl-libs-3.2.2-6.el9_5.1.x86_64. 26 MB/s | 2.1 MB 00:00 (120/208): libxml2-2.9.13-6.el9_5.1.x86_64.rpm 5.9 MB/s | 752 kB 00:00 (121/208): pam-1.5.1-22.el9_5.x86_64.rpm 14 MB/s | 632 kB 00:00 (122/208): p11-kit-trust-0.25.3-3.el9_5.x86_64. 1.6 MB/s | 144 kB 00:00 (123/208): tzdata-2025a-1.el9.noarch.rpm 18 MB/s | 841 kB 00:00 (124/208): shadow-utils-4.9-10.el9_5.x86_64.rpm 17 MB/s | 1.2 MB 00:00 (125/208): lua-srpm-macros-1-6.el9.noarch.rpm 234 kB/s | 10 kB 00:00 (126/208): ghc-srpm-macros-1.5.0-6.el9.noarch.r 146 kB/s | 9.0 kB 00:00 (127/208): perl-Data-Dumper-2.174-462.el9.x86_6 888 kB/s | 59 kB 00:00 (128/208): perl-Digest-MD5-2.58-4.el9.x86_64.rp 924 kB/s | 39 kB 00:00 (129/208): perl-Encode-3.08-462.el9.x86_64.rpm 34 MB/s | 1.7 MB 00:00 (130/208): perl-Mozilla-CA-20200520-6.el9.noarc 320 kB/s | 14 kB 00:00 (131/208): perl-Term-Cap-1.17-460.el9.noarch.rp 564 kB/s | 24 kB 00:00 (132/208): perl-Pod-Simple-3.42-4.el9.noarch.rp 3.0 MB/s | 229 kB 00:00 (133/208): perl-Time-Local-1.300-7.el9.noarch.r 734 kB/s | 37 kB 00:00 (134/208): rust-srpm-macros-17-4.el9.noarch.rpm 261 kB/s | 11 kB 00:00 (135/208): perl-Pod-Escapes-1.07-460.el9.noarch 480 kB/s | 22 kB 00:00 (136/208): perl-Storable-3.21-460.el9.x86_64.rp 1.8 MB/s | 98 kB 00:00 (137/208): perl-podlators-4.14-460.el9.noarch.r 2.6 MB/s | 118 kB 00:00 (138/208): dwz-0.14-3.el9.x86_64.rpm 3.0 MB/s | 130 kB 00:00 (139/208): perl-Exporter-5.74-461.el9.noarch.rp 699 kB/s | 34 kB 00:00 (140/208): perl-File-Temp-0.231.100-4.el9.noarc 1.4 MB/s | 63 kB 00:00 (141/208): perl-Getopt-Long-2.52-4.el9.noarch.r 1.2 MB/s | 64 kB 00:00 (142/208): perl-MIME-Base64-3.16-4.el9.x86_64.r 795 kB/s | 34 kB 00:00 (143/208): perl-Text-Tabs+Wrap-2013.0523-460.el 550 kB/s | 25 kB 00:00 (144/208): perl-srpm-macros-1-41.el9.noarch.rpm 209 kB/s | 9.4 kB 00:00 (145/208): perl-parent-0.238-460.el9.noarch.rpm 273 kB/s | 16 kB 00:00 (146/208): efi-srpm-macros-6-2.el9_0.noarch.rpm 527 kB/s | 24 kB 00:00 (147/208): openblas-srpm-macros-2-11.el9.noarch 198 kB/s | 8.6 kB 00:00 (148/208): perl-Digest-1.19-4.el9.noarch.rpm 645 kB/s | 29 kB 00:00 (149/208): perl-File-Path-2.18-4.el9.noarch.rpm 827 kB/s | 38 kB 00:00 (150/208): perl-IO-Socket-IP-0.41-5.el9.noarch. 856 kB/s | 45 kB 00:00 (151/208): perl-Pod-Usage-2.01-4.el9.noarch.rpm 934 kB/s | 43 kB 00:00 (152/208): perl-Socket-2.031-4.el9.x86_64.rpm 1.3 MB/s | 58 kB 00:00 (153/208): perl-Text-ParseWords-3.30-460.el9.no 399 kB/s | 18 kB 00:00 (154/208): perl-Term-ANSIColor-5.01-461.el9.noa 908 kB/s | 51 kB 00:00 (155/208): perl-URI-5.09-3.el9.noarch.rpm 1.9 MB/s | 125 kB 00:00 (156/208): perl-constant-1.33-461.el9.noarch.rp 430 kB/s | 25 kB 00:00 (157/208): fonts-srpm-macros-2.0.5-7.el9.1.noar 462 kB/s | 29 kB 00:00 (158/208): ocaml-srpm-macros-6-6.el9.noarch.rpm 212 kB/s | 9.1 kB 00:00 (159/208): patch-2.7.6-16.el9.x86_64.rpm 2.8 MB/s | 130 kB 00:00 (160/208): perl-Pod-Perldoc-3.28.01-461.el9.noa 2.1 MB/s | 92 kB 00:00 (161/208): perl-libnet-3.13-4.el9.noarch.rpm 3.1 MB/s | 134 kB 00:00 (162/208): perl-Carp-1.50-460.el9.noarch.rpm 665 kB/s | 31 kB 00:00 (163/208): perl-PathTools-3.78-461.el9.x86_64.r 2.1 MB/s | 92 kB 00:00 (164/208): kernel-srpm-macros-1.0-13.el9.noarch 400 kB/s | 17 kB 00:00 (165/208): qt5-srpm-macros-5.15.9-1.el9.noarch. 199 kB/s | 9.1 kB 00:00 (166/208): perl-AutoLoader-5.74-481.el9.noarch. 494 kB/s | 21 kB 00:00 (167/208): perl-if-0.60.800-481.el9.noarch.rpm 299 kB/s | 14 kB 00:00 (168/208): perl-SelectSaver-1.02-481.el9.noarch 138 kB/s | 12 kB 00:00 (169/208): perl-overloading-0.02-481.el9.noarch 287 kB/s | 13 kB 00:00 (170/208): perl-Errno-1.30-481.el9.x86_64.rpm 321 kB/s | 15 kB 00:00 (171/208): perl-Class-Struct-0.66-481.el9.noarc 433 kB/s | 22 kB 00:00 (172/208): perl-B-1.80-481.el9.x86_64.rpm 3.1 MB/s | 184 kB 00:00 (173/208): perl-File-Basename-2.85-481.el9.noar 325 kB/s | 17 kB 00:00 (174/208): perl-FileHandle-2.03-481.el9.noarch. 270 kB/s | 16 kB 00:00 (175/208): perl-File-stat-1.09-481.el9.noarch.r 248 kB/s | 17 kB 00:00 (176/208): perl-Getopt-Std-1.12-481.el9.noarch. 349 kB/s | 16 kB 00:00 (177/208): perl-IO-1.43-481.el9.x86_64.rpm 2.1 MB/s | 92 kB 00:00 (178/208): perl-HTTP-Tiny-0.076-462.el9.noarch. 1.1 MB/s | 57 kB 00:00 (179/208): perl-IPC-Open3-1.21-481.el9.noarch.r 547 kB/s | 24 kB 00:00 (180/208): perl-POSIX-1.94-481.el9.x86_64.rpm 2.2 MB/s | 98 kB 00:00 (181/208): perl-Symbol-1.08-481.el9.noarch.rpm 313 kB/s | 14 kB 00:00 (182/208): perl-interpreter-5.32.1-481.el9.x86_ 1.6 MB/s | 73 kB 00:00 (183/208): perl-base-2.27-481.el9.noarch.rpm 242 kB/s | 16 kB 00:00 (184/208): perl-libs-5.32.1-481.el9.x86_64.rpm 28 MB/s | 2.2 MB 00:00 (185/208): perl-mro-1.23-481.el9.x86_64.rpm 677 kB/s | 29 kB 00:00 (186/208): perl-overload-1.31-481.el9.noarch.rp 1.0 MB/s | 46 kB 00:00 (187/208): perl-vars-1.05-481.el9.noarch.rpm 283 kB/s | 13 kB 00:00 (188/208): pyproject-srpm-macros-1.12.0-1.el9.n 334 kB/s | 14 kB 00:00 (189/208): debugedit-5.0-5.el9.x86_64.rpm 1.7 MB/s | 81 kB 00:00 (190/208): perl-Fcntl-1.13-481.el9.x86_64.rpm 413 kB/s | 22 kB 00:00 (191/208): go-srpm-macros-3.6.0-3.el9.noarch.rp 657 kB/s | 28 kB 00:00 (192/208): perl-subs-1.03-481.el9.noarch.rpm 135 kB/s | 12 kB 00:00 (193/208): python-srpm-macros-3.9-54.el9.noarch 405 kB/s | 18 kB 00:00 (194/208): perl-IO-Socket-SSL-2.073-2.el9.noarc 4.6 MB/s | 221 kB 00:00 (195/208): redhat-rpm-config-208-1.el9.noarch.r 617 kB/s | 75 kB 00:00 (196/208): perl-Net-SSLeay-1.94-1.el9.x86_64.rp 8.9 MB/s | 418 kB 00:00 (197/208): gdb-minimal-14.2-3.el9.x86_64.rpm 34 MB/s | 4.2 MB 00:00 (198/208): perl-Scalar-List-Utils-1.56-462.el9. 1.6 MB/s | 75 kB 00:00 (199/208): epel-rpm-macros-9-15.el9.noarch.rpm 8.2 MB/s | 15 kB 00:00 (200/208): ansible-srpm-macros-1-16.el9.noarch. 1.8 MB/s | 21 kB 00:00 (201/208): fpc-srpm-macros-1.3-7.el9.noarch.rpm 4.9 MB/s | 7.5 kB 00:00 (202/208): forge-srpm-macros-0.4.0-1.el9.noarch 8.3 MB/s | 19 kB 00:00 (203/208): ghc-srpm-macros-epel-1.9.1-1.el9.noa 3.4 MB/s | 7.5 kB 00:00 (204/208): go-srpm-macros-epel-3.6.0.1-1.el9.no 10 MB/s | 26 kB 00:00 (205/208): qt6-srpm-macros-6.6.2-1.el9.noarch.r 4.4 MB/s | 8.7 kB 00:00 (206/208): rpmautospec-rpm-macros-0.7.3-1.el9.n 4.0 MB/s | 10 kB 00:00 (207/208): rust-srpm-macros-epel-26.3-1.el9.noa 7.6 MB/s | 10 kB 00:00 (208/208): rpm-build-4.16.1.3-34.el9.x86_64.rpm 1.5 MB/s | 71 kB 00:00 -------------------------------------------------------------------------------- Total 17 MB/s | 69 MB 00:04 Red Hat Enterprise Linux - BaseOS 3.5 MB/s | 3.6 kB 00:00 Importing GPG key 0xFD431D51: Userid : "Red Hat, Inc. (release key 2) " Fingerprint: 567E 347A D004 4ADE 55BA 8A5F 199E 2F91 FD43 1D51 From : /usr/share/distribution-gpg-keys/redhat/RPM-GPG-KEY-redhat9-release Key imported successfully Importing GPG key 0x5A6340B3: Userid : "Red Hat, Inc. (auxiliary key 3) " Fingerprint: 7E46 2425 8C40 6535 D56D 6F13 5054 E4A4 5A63 40B3 From : /usr/share/distribution-gpg-keys/redhat/RPM-GPG-KEY-redhat9-release Key imported successfully Extra Packages for Enterprise Linux 9 - x86_64 1.6 MB/s | 1.6 kB 00:00 Importing GPG key 0x3228467C: Userid : "Fedora (epel9) " Fingerprint: FF8A D134 4597 106E CE81 3B91 8A38 72BF 3228 467C From : /usr/share/distribution-gpg-keys/epel/RPM-GPG-KEY-EPEL-9 Key imported successfully Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Running scriptlet: filesystem-3.16-5.el9.x86_64 1/1 Preparing : 1/1 Installing : libgcc-11.5.0-5.el9_5.x86_64 1/208 Running scriptlet: libgcc-11.5.0-5.el9_5.x86_64 1/208 Installing : rust-srpm-macros-17-4.el9.noarch 2/208 Installing : redhat-release-9.5-0.6.el9.x86_64 3/208 Installing : setup-2.13.7-10.el9.noarch 4/208 warning: /etc/hosts created as /etc/hosts.rpmnew Running scriptlet: setup-2.13.7-10.el9.noarch 4/208 Installing : filesystem-3.16-5.el9.x86_64 5/208 Installing : basesystem-11-13.el9.noarch 6/208 Installing : rust-srpm-macros-epel-26.3-1.el9.noarch 7/208 Installing : qt6-srpm-macros-6.6.2-1.el9.noarch 8/208 Installing : ghc-srpm-macros-epel-1.9.1-1.el9.noarch 9/208 Installing : fpc-srpm-macros-1.3-7.el9.noarch 10/208 Installing : ansible-srpm-macros-1-16.el9.noarch 11/208 Installing : qt5-srpm-macros-5.15.9-1.el9.noarch 12/208 Installing : ocaml-srpm-macros-6-6.el9.noarch 13/208 Installing : openblas-srpm-macros-2-11.el9.noarch 14/208 Installing : perl-srpm-macros-1-41.el9.noarch 15/208 Installing : ghc-srpm-macros-1.5.0-6.el9.noarch 16/208 Installing : tzdata-2025a-1.el9.noarch 17/208 Installing : pcre2-syntax-10.40-6.el9.noarch 18/208 Installing : coreutils-common-8.32-36.el9.x86_64 19/208 Installing : libssh-config-0.10.4-13.el9.noarch 20/208 Installing : ncurses-base-6.2-10.20210508.el9.noarch 21/208 Installing : ncurses-libs-6.2-10.20210508.el9.x86_64 22/208 Installing : bash-5.1.8-9.el9.x86_64 23/208 Running scriptlet: bash-5.1.8-9.el9.x86_64 23/208 Installing : glibc-common-2.34-125.el9_5.1.x86_64 24/208 Installing : glibc-gconv-extra-2.34-125.el9_5.1.x86_64 25/208 Running scriptlet: glibc-gconv-extra-2.34-125.el9_5.1.x86_64 25/208 Installing : glibc-minimal-langpack-2.34-125.el9_5.1.x86_64 26/208 Running scriptlet: glibc-2.34-125.el9_5.1.x86_64 27/208 Installing : glibc-2.34-125.el9_5.1.x86_64 27/208 Running scriptlet: glibc-2.34-125.el9_5.1.x86_64 27/208 Installing : zlib-1.2.11-40.el9.x86_64 28/208 Installing : xz-libs-5.2.5-8.el9_0.x86_64 29/208 Installing : bzip2-libs-1.0.8-10.el9_5.x86_64 30/208 Installing : libzstd-1.5.1-2.el9.x86_64 31/208 Installing : elfutils-libelf-0.191-4.el9.x86_64 32/208 Installing : libxcrypt-4.4.18-3.el9.x86_64 33/208 Installing : libstdc++-11.5.0-5.el9_5.x86_64 34/208 Installing : libuuid-2.37.4-20.el9.x86_64 35/208 Installing : libattr-2.5.1-3.el9.x86_64 36/208 Installing : libacl-2.3.1-4.el9.x86_64 37/208 Installing : popt-1.18-8.el9.x86_64 38/208 Installing : libcap-2.48-9.el9_2.x86_64 39/208 Installing : gmp-1:6.2.0-13.el9.x86_64 40/208 Installing : lz4-libs-1.9.3-5.el9.x86_64 41/208 Installing : readline-8.1-4.el9.x86_64 42/208 Installing : libcom_err-1.46.5-5.el9.x86_64 43/208 Installing : crypto-policies-20240828-2.git626aa59.el9_5.noar 44/208 Running scriptlet: crypto-policies-20240828-2.git626aa59.el9_5.noar 44/208 Installing : mpfr-4.1.0-7.el9.x86_64 45/208 Installing : dwz-0.14-3.el9.x86_64 46/208 Installing : unzip-6.0-57.el9.x86_64 47/208 Installing : file-libs-5.39-16.el9.x86_64 48/208 Installing : file-5.39-16.el9.x86_64 49/208 Installing : sqlite-libs-3.34.1-7.el9_3.x86_64 50/208 Installing : libcap-ng-0.8.2-7.el9.x86_64 51/208 Installing : audit-libs-3.1.5-1.el9.x86_64 52/208 Installing : libsigsegv-2.13-4.el9.x86_64 53/208 Installing : gawk-5.1.0-6.el9.x86_64 54/208 Installing : libunistring-0.9.10-15.el9.x86_64 55/208 Installing : libidn2-2.3.0-7.el9.x86_64 56/208 Installing : libtasn1-4.16.0-8.el9_1.x86_64 57/208 Installing : lua-libs-5.4.4-4.el9.x86_64 58/208 Installing : libsepol-3.6-1.el9.x86_64 59/208 Installing : alternatives-1.24-1.el9_5.1.x86_64 60/208 Installing : libsmartcols-2.37.4-20.el9.x86_64 61/208 Installing : zip-3.0-35.el9.x86_64 62/208 Installing : zstd-1.5.1-2.el9.x86_64 63/208 Running scriptlet: groff-base-1.22.4-10.el9.x86_64 64/208 Installing : groff-base-1.22.4-10.el9.x86_64 64/208 Running scriptlet: groff-base-1.22.4-10.el9.x86_64 64/208 Installing : bzip2-1.0.8-10.el9_5.x86_64 65/208 Installing : libxml2-2.9.13-6.el9_5.1.x86_64 66/208 Installing : info-6.7-15.el9.x86_64 67/208 Installing : ed-1.14.2-12.el9.x86_64 68/208 Installing : cpio-2.13-16.el9.x86_64 69/208 Installing : libgpg-error-1.42-5.el9.x86_64 70/208 Installing : libgcrypt-1.10.0-11.el9.x86_64 71/208 Installing : libverto-0.3.2-3.el9.x86_64 72/208 Installing : diffutils-3.7-12.el9.x86_64 73/208 Installing : keyutils-libs-1.6.3-1.el9.x86_64 74/208 Installing : libpkgconf-1.7.3-10.el9.x86_64 75/208 Installing : pkgconf-1.7.3-10.el9.x86_64 76/208 Installing : libffi-3.4.2-8.el9.x86_64 77/208 Installing : p11-kit-0.25.3-3.el9_5.x86_64 78/208 Installing : p11-kit-trust-0.25.3-3.el9_5.x86_64 79/208 Running scriptlet: p11-kit-trust-0.25.3-3.el9_5.x86_64 79/208 Installing : ncurses-6.2-10.20210508.el9.x86_64 80/208 Installing : gdbm-libs-1:1.23-1.el9.x86_64 81/208 Installing : libdb-5.3.28-54.el9.x86_64 82/208 Installing : libeconf-0.4.1-4.el9.x86_64 83/208 Installing : libnghttp2-1.43.0-6.el9.x86_64 84/208 Installing : pcre-8.44-4.el9.x86_64 85/208 Installing : grep-3.6-5.el9.x86_64 86/208 Installing : xz-5.2.5-8.el9_0.x86_64 87/208 Installing : pcre2-10.40-6.el9.x86_64 88/208 Installing : libselinux-3.6-1.el9.x86_64 89/208 Installing : sed-4.8-9.el9.x86_64 90/208 Installing : findutils-1:4.8.0-7.el9.x86_64 91/208 Installing : openssl-fips-provider-so-3.0.7-6.el9_5.x86_64 92/208 Installing : openssl-fips-provider-3.0.7-6.el9_5.x86_64 93/208 Installing : openssl-libs-1:3.2.2-6.el9_5.1.x86_64 94/208 Installing : coreutils-8.32-36.el9.x86_64 95/208 Running scriptlet: ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.no 96/208 Installing : ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.no 96/208 Running scriptlet: ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.no 96/208 Installing : libblkid-2.37.4-20.el9.x86_64 97/208 Running scriptlet: libblkid-2.37.4-20.el9.x86_64 97/208 Installing : krb5-libs-1.21.1-4.el9_5.x86_64 98/208 Installing : libmount-2.37.4-20.el9.x86_64 99/208 Installing : gzip-1.12-1.el9.x86_64 100/208 Installing : cracklib-2.9.6-27.el9.x86_64 101/208 Installing : systemd-libs-252-46.el9_5.2.x86_64 102/208 Running scriptlet: systemd-libs-252-46.el9_5.2.x86_64 102/208 Installing : libarchive-3.5.3-4.el9.x86_64 103/208 Installing : util-linux-core-2.37.4-20.el9.x86_64 104/208 Running scriptlet: util-linux-core-2.37.4-20.el9.x86_64 104/208 Installing : cracklib-dicts-2.9.6-27.el9.x86_64 105/208 Installing : cyrus-sasl-lib-2.1.27-21.el9.x86_64 106/208 Installing : libssh-0.10.4-13.el9.x86_64 107/208 Installing : libfdisk-2.37.4-20.el9.x86_64 108/208 Installing : perl-Digest-1.19-4.el9.noarch 109/208 Installing : perl-Digest-MD5-2.58-4.el9.x86_64 110/208 Installing : perl-B-1.80-481.el9.x86_64 111/208 Installing : perl-FileHandle-2.03-481.el9.noarch 112/208 Installing : perl-Data-Dumper-2.174-462.el9.x86_64 113/208 Installing : perl-libnet-3.13-4.el9.noarch 114/208 Installing : perl-AutoLoader-5.74-481.el9.noarch 115/208 Installing : perl-base-2.27-481.el9.noarch 116/208 Installing : perl-URI-5.09-3.el9.noarch 117/208 Installing : perl-Time-Local-2:1.300-7.el9.noarch 118/208 Installing : perl-if-0.60.800-481.el9.noarch 119/208 Installing : perl-Mozilla-CA-20200520-6.el9.noarch 120/208 Installing : perl-Pod-Escapes-1:1.07-460.el9.noarch 121/208 Installing : perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch 122/208 Installing : perl-File-Path-2.18-4.el9.noarch 123/208 Installing : perl-IO-Socket-IP-0.41-5.el9.noarch 124/208 Installing : perl-Net-SSLeay-1.94-1.el9.x86_64 125/208 Installing : perl-IO-Socket-SSL-2.073-2.el9.noarch 126/208 Installing : perl-Term-ANSIColor-5.01-461.el9.noarch 127/208 Installing : perl-Class-Struct-0.66-481.el9.noarch 128/208 Installing : perl-Term-Cap-1.17-460.el9.noarch 129/208 Installing : perl-File-Temp-1:0.231.100-4.el9.noarch 130/208 Installing : perl-IPC-Open3-1.21-481.el9.noarch 131/208 Installing : perl-POSIX-1.94-481.el9.x86_64 132/208 Installing : perl-subs-1.03-481.el9.noarch 133/208 Installing : perl-Pod-Simple-1:3.42-4.el9.noarch 134/208 Installing : perl-HTTP-Tiny-0.076-462.el9.noarch 135/208 Installing : perl-Socket-4:2.031-4.el9.x86_64 136/208 Installing : perl-SelectSaver-1.02-481.el9.noarch 137/208 Installing : perl-Symbol-1.08-481.el9.noarch 138/208 Installing : perl-podlators-1:4.14-460.el9.noarch 139/208 Installing : perl-Pod-Perldoc-3.28.01-461.el9.noarch 140/208 Installing : perl-File-stat-1.09-481.el9.noarch 141/208 Installing : perl-Text-ParseWords-3.30-460.el9.noarch 142/208 Installing : perl-overloading-0.02-481.el9.noarch 143/208 Installing : perl-mro-1.23-481.el9.x86_64 144/208 Installing : perl-Fcntl-1.13-481.el9.x86_64 145/208 Installing : perl-Pod-Usage-4:2.01-4.el9.noarch 146/208 Installing : perl-IO-1.43-481.el9.x86_64 147/208 Installing : perl-parent-1:0.238-460.el9.noarch 148/208 Installing : perl-MIME-Base64-3.16-4.el9.x86_64 149/208 Installing : perl-constant-1.33-461.el9.noarch 150/208 Installing : perl-Errno-1.30-481.el9.x86_64 151/208 Installing : perl-File-Basename-2.85-481.el9.noarch 152/208 Installing : perl-Getopt-Std-1.12-481.el9.noarch 153/208 Installing : perl-vars-1.05-481.el9.noarch 154/208 Installing : perl-Storable-1:3.21-460.el9.x86_64 155/208 Installing : perl-Getopt-Long-1:2.52-4.el9.noarch 156/208 Installing : perl-overload-1.31-481.el9.noarch 157/208 Installing : perl-Scalar-List-Utils-4:1.56-462.el9.x86_64 158/208 Installing : perl-Exporter-5.74-461.el9.noarch 159/208 Installing : perl-Carp-1.50-460.el9.noarch 160/208 Installing : perl-PathTools-3.78-461.el9.x86_64 161/208 Installing : perl-Encode-4:3.08-462.el9.x86_64 162/208 Installing : perl-libs-4:5.32.1-481.el9.x86_64 163/208 Installing : perl-interpreter-4:5.32.1-481.el9.x86_64 164/208 Installing : kernel-srpm-macros-1.0-13.el9.noarch 165/208 Installing : openssl-1:3.2.2-6.el9_5.1.x86_64 166/208 Installing : libpwquality-1.4.4-8.el9.x86_64 167/208 Installing : pam-1.5.1-22.el9_5.x86_64 168/208 Installing : libevent-2.1.12-8.el9_4.x86_64 169/208 Installing : tar-2:1.34-7.el9.x86_64 170/208 Installing : libsemanage-3.6-2.1.el9_5.x86_64 171/208 Installing : shadow-utils-2:4.9-10.el9_5.x86_64 172/208 Running scriptlet: libutempter-1.2.1-6.el9.x86_64 173/208 Installing : libutempter-1.2.1-6.el9.x86_64 173/208 Installing : openldap-2.6.6-3.el9.x86_64 174/208 Installing : patch-2.7.6-16.el9.x86_64 175/208 Installing : libbrotli-1.0.9-7.el9_5.x86_64 176/208 Installing : libgomp-11.5.0-5.el9_5.x86_64 177/208 Installing : elfutils-default-yama-scope-0.191-4.el9.noarch 178/208 Running scriptlet: elfutils-default-yama-scope-0.191-4.el9.noarch 178/208 Installing : elfutils-libs-0.191-4.el9.x86_64 179/208 Installing : pkgconf-m4-1.7.3-10.el9.noarch 180/208 Installing : pkgconf-pkg-config-1.7.3-10.el9.x86_64 181/208 Installing : publicsuffix-list-dafsa-20210518-3.el9.noarch 182/208 Installing : libpsl-0.21.1-5.el9.x86_64 183/208 Installing : libcurl-7.76.1-31.el9.x86_64 184/208 Installing : elfutils-debuginfod-client-0.191-4.el9.x86_64 185/208 Installing : binutils-gold-2.35.2-54.el9.x86_64 186/208 Installing : binutils-2.35.2-54.el9.x86_64 187/208 Running scriptlet: binutils-2.35.2-54.el9.x86_64 187/208 Installing : elfutils-0.191-4.el9.x86_64 188/208 Installing : gdb-minimal-14.2-3.el9.x86_64 189/208 Installing : debugedit-5.0-5.el9.x86_64 190/208 Installing : curl-7.76.1-31.el9.x86_64 191/208 Installing : rpm-libs-4.16.1.3-34.el9.x86_64 192/208 Installing : rpm-4.16.1.3-34.el9.x86_64 193/208 Installing : lua-srpm-macros-1-6.el9.noarch 194/208 Installing : efi-srpm-macros-6-2.el9_0.noarch 195/208 Installing : rpmautospec-rpm-macros-0.7.3-1.el9.noarch 196/208 Installing : rpm-build-libs-4.16.1.3-34.el9.x86_64 197/208 Installing : fonts-srpm-macros-1:2.0.5-7.el9.1.noarch 198/208 Installing : go-srpm-macros-3.6.0-3.el9.noarch 199/208 Installing : python-srpm-macros-3.9-54.el9.noarch 200/208 Installing : redhat-rpm-config-208-1.el9.noarch 201/208 Installing : rpm-build-4.16.1.3-34.el9.x86_64 202/208 Installing : pyproject-srpm-macros-1.12.0-1.el9.noarch 203/208 Installing : forge-srpm-macros-0.4.0-1.el9.noarch 204/208 Installing : go-srpm-macros-epel-3.6.0.1-1.el9.noarch 205/208 Installing : epel-rpm-macros-9-15.el9.noarch 206/208 Installing : util-linux-2.37.4-20.el9.x86_64 207/208 Installing : which-2.21-29.el9.x86_64 208/208 Running scriptlet: filesystem-3.16-5.el9.x86_64 208/208 Running scriptlet: ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.no 208/208 Running scriptlet: rpm-4.16.1.3-34.el9.x86_64 208/208 Running scriptlet: which-2.21-29.el9.x86_64 208/208 Verifying : basesystem-11-13.el9.noarch 1/208 Verifying : cpio-2.13-16.el9.x86_64 2/208 Verifying : cracklib-2.9.6-27.el9.x86_64 3/208 Verifying : ed-1.14.2-12.el9.x86_64 4/208 Verifying : grep-3.6-5.el9.x86_64 5/208 Verifying : groff-base-1.22.4-10.el9.x86_64 6/208 Verifying : info-6.7-15.el9.x86_64 7/208 Verifying : libattr-2.5.1-3.el9.x86_64 8/208 Verifying : libcap-ng-0.8.2-7.el9.x86_64 9/208 Verifying : libgpg-error-1.42-5.el9.x86_64 10/208 Verifying : libidn2-2.3.0-7.el9.x86_64 11/208 Verifying : libpsl-0.21.1-5.el9.x86_64 12/208 Verifying : libpwquality-1.4.4-8.el9.x86_64 13/208 Verifying : libsigsegv-2.13-4.el9.x86_64 14/208 Verifying : libunistring-0.9.10-15.el9.x86_64 15/208 Verifying : libutempter-1.2.1-6.el9.x86_64 16/208 Verifying : libverto-0.3.2-3.el9.x86_64 17/208 Verifying : libxcrypt-4.4.18-3.el9.x86_64 18/208 Verifying : libzstd-1.5.1-2.el9.x86_64 19/208 Verifying : lz4-libs-1.9.3-5.el9.x86_64 20/208 Verifying : mpfr-4.1.0-7.el9.x86_64 21/208 Verifying : popt-1.18-8.el9.x86_64 22/208 Verifying : publicsuffix-list-dafsa-20210518-3.el9.noarch 23/208 Verifying : readline-8.1-4.el9.x86_64 24/208 Verifying : sed-4.8-9.el9.x86_64 25/208 Verifying : cracklib-dicts-2.9.6-27.el9.x86_64 26/208 Verifying : diffutils-3.7-12.el9.x86_64 27/208 Verifying : gawk-5.1.0-6.el9.x86_64 28/208 Verifying : xz-5.2.5-8.el9_0.x86_64 29/208 Verifying : xz-libs-5.2.5-8.el9_0.x86_64 30/208 Verifying : zstd-1.5.1-2.el9.x86_64 31/208 Verifying : cyrus-sasl-lib-2.1.27-21.el9.x86_64 32/208 Verifying : gzip-1.12-1.el9.x86_64 33/208 Verifying : libtasn1-4.16.0-8.el9_1.x86_64 34/208 Verifying : keyutils-libs-1.6.3-1.el9.x86_64 35/208 Verifying : libarchive-3.5.3-4.el9.x86_64 36/208 Verifying : libcap-2.48-9.el9_2.x86_64 37/208 Verifying : libpkgconf-1.7.3-10.el9.x86_64 38/208 Verifying : pkgconf-1.7.3-10.el9.x86_64 39/208 Verifying : pkgconf-m4-1.7.3-10.el9.noarch 40/208 Verifying : pkgconf-pkg-config-1.7.3-10.el9.x86_64 41/208 Verifying : zip-3.0-35.el9.x86_64 42/208 Verifying : gmp-1:6.2.0-13.el9.x86_64 43/208 Verifying : libffi-3.4.2-8.el9.x86_64 44/208 Verifying : lua-libs-5.4.4-4.el9.x86_64 45/208 Verifying : ncurses-6.2-10.20210508.el9.x86_64 46/208 Verifying : ncurses-base-6.2-10.20210508.el9.noarch 47/208 Verifying : ncurses-libs-6.2-10.20210508.el9.x86_64 48/208 Verifying : which-2.21-29.el9.x86_64 49/208 Verifying : zlib-1.2.11-40.el9.x86_64 50/208 Verifying : bash-5.1.8-9.el9.x86_64 51/208 Verifying : file-5.39-16.el9.x86_64 52/208 Verifying : file-libs-5.39-16.el9.x86_64 53/208 Verifying : libacl-2.3.1-4.el9.x86_64 54/208 Verifying : libselinux-3.6-1.el9.x86_64 55/208 Verifying : libsepol-3.6-1.el9.x86_64 56/208 Verifying : openldap-2.6.6-3.el9.x86_64 57/208 Verifying : setup-2.13.7-10.el9.noarch 58/208 Verifying : sqlite-libs-3.34.1-7.el9_3.x86_64 59/208 Verifying : ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.no 60/208 Verifying : libcom_err-1.46.5-5.el9.x86_64 61/208 Verifying : libevent-2.1.12-8.el9_4.x86_64 62/208 Verifying : libssh-0.10.4-13.el9.x86_64 63/208 Verifying : libssh-config-0.10.4-13.el9.noarch 64/208 Verifying : alternatives-1.24-1.el9_5.1.x86_64 65/208 Verifying : audit-libs-3.1.5-1.el9.x86_64 66/208 Verifying : binutils-2.35.2-54.el9.x86_64 67/208 Verifying : binutils-gold-2.35.2-54.el9.x86_64 68/208 Verifying : coreutils-8.32-36.el9.x86_64 69/208 Verifying : coreutils-common-8.32-36.el9.x86_64 70/208 Verifying : crypto-policies-20240828-2.git626aa59.el9_5.noar 71/208 Verifying : curl-7.76.1-31.el9.x86_64 72/208 Verifying : elfutils-0.191-4.el9.x86_64 73/208 Verifying : elfutils-debuginfod-client-0.191-4.el9.x86_64 74/208 Verifying : elfutils-default-yama-scope-0.191-4.el9.noarch 75/208 Verifying : elfutils-libelf-0.191-4.el9.x86_64 76/208 Verifying : elfutils-libs-0.191-4.el9.x86_64 77/208 Verifying : filesystem-3.16-5.el9.x86_64 78/208 Verifying : findutils-1:4.8.0-7.el9.x86_64 79/208 Verifying : gdbm-libs-1:1.23-1.el9.x86_64 80/208 Verifying : glibc-2.34-125.el9_5.1.x86_64 81/208 Verifying : glibc-common-2.34-125.el9_5.1.x86_64 82/208 Verifying : glibc-gconv-extra-2.34-125.el9_5.1.x86_64 83/208 Verifying : glibc-minimal-langpack-2.34-125.el9_5.1.x86_64 84/208 Verifying : libblkid-2.37.4-20.el9.x86_64 85/208 Verifying : libcurl-7.76.1-31.el9.x86_64 86/208 Verifying : libdb-5.3.28-54.el9.x86_64 87/208 Verifying : libeconf-0.4.1-4.el9.x86_64 88/208 Verifying : libfdisk-2.37.4-20.el9.x86_64 89/208 Verifying : libgcrypt-1.10.0-11.el9.x86_64 90/208 Verifying : libmount-2.37.4-20.el9.x86_64 91/208 Verifying : libnghttp2-1.43.0-6.el9.x86_64 92/208 Verifying : libsmartcols-2.37.4-20.el9.x86_64 93/208 Verifying : libuuid-2.37.4-20.el9.x86_64 94/208 Verifying : openssl-fips-provider-3.0.7-6.el9_5.x86_64 95/208 Verifying : openssl-fips-provider-so-3.0.7-6.el9_5.x86_64 96/208 Verifying : pcre-8.44-4.el9.x86_64 97/208 Verifying : pcre2-10.40-6.el9.x86_64 98/208 Verifying : pcre2-syntax-10.40-6.el9.noarch 99/208 Verifying : redhat-release-9.5-0.6.el9.x86_64 100/208 Verifying : rpm-4.16.1.3-34.el9.x86_64 101/208 Verifying : rpm-build-libs-4.16.1.3-34.el9.x86_64 102/208 Verifying : rpm-libs-4.16.1.3-34.el9.x86_64 103/208 Verifying : systemd-libs-252-46.el9_5.2.x86_64 104/208 Verifying : tar-2:1.34-7.el9.x86_64 105/208 Verifying : unzip-6.0-57.el9.x86_64 106/208 Verifying : util-linux-2.37.4-20.el9.x86_64 107/208 Verifying : util-linux-core-2.37.4-20.el9.x86_64 108/208 Verifying : bzip2-1.0.8-10.el9_5.x86_64 109/208 Verifying : bzip2-libs-1.0.8-10.el9_5.x86_64 110/208 Verifying : krb5-libs-1.21.1-4.el9_5.x86_64 111/208 Verifying : libbrotli-1.0.9-7.el9_5.x86_64 112/208 Verifying : libgcc-11.5.0-5.el9_5.x86_64 113/208 Verifying : libgomp-11.5.0-5.el9_5.x86_64 114/208 Verifying : libsemanage-3.6-2.1.el9_5.x86_64 115/208 Verifying : libstdc++-11.5.0-5.el9_5.x86_64 116/208 Verifying : libxml2-2.9.13-6.el9_5.1.x86_64 117/208 Verifying : openssl-1:3.2.2-6.el9_5.1.x86_64 118/208 Verifying : openssl-libs-1:3.2.2-6.el9_5.1.x86_64 119/208 Verifying : p11-kit-0.25.3-3.el9_5.x86_64 120/208 Verifying : p11-kit-trust-0.25.3-3.el9_5.x86_64 121/208 Verifying : pam-1.5.1-22.el9_5.x86_64 122/208 Verifying : shadow-utils-2:4.9-10.el9_5.x86_64 123/208 Verifying : tzdata-2025a-1.el9.noarch 124/208 Verifying : ghc-srpm-macros-1.5.0-6.el9.noarch 125/208 Verifying : lua-srpm-macros-1-6.el9.noarch 126/208 Verifying : perl-Data-Dumper-2.174-462.el9.x86_64 127/208 Verifying : perl-Digest-MD5-2.58-4.el9.x86_64 128/208 Verifying : perl-Encode-4:3.08-462.el9.x86_64 129/208 Verifying : perl-Mozilla-CA-20200520-6.el9.noarch 130/208 Verifying : perl-Pod-Simple-1:3.42-4.el9.noarch 131/208 Verifying : perl-Term-Cap-1.17-460.el9.noarch 132/208 Verifying : perl-Time-Local-2:1.300-7.el9.noarch 133/208 Verifying : rust-srpm-macros-17-4.el9.noarch 134/208 Verifying : perl-Pod-Escapes-1:1.07-460.el9.noarch 135/208 Verifying : perl-Storable-1:3.21-460.el9.x86_64 136/208 Verifying : perl-podlators-1:4.14-460.el9.noarch 137/208 Verifying : dwz-0.14-3.el9.x86_64 138/208 Verifying : perl-Exporter-5.74-461.el9.noarch 139/208 Verifying : perl-File-Temp-1:0.231.100-4.el9.noarch 140/208 Verifying : perl-Getopt-Long-1:2.52-4.el9.noarch 141/208 Verifying : perl-MIME-Base64-3.16-4.el9.x86_64 142/208 Verifying : perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch 143/208 Verifying : perl-parent-1:0.238-460.el9.noarch 144/208 Verifying : perl-srpm-macros-1-41.el9.noarch 145/208 Verifying : efi-srpm-macros-6-2.el9_0.noarch 146/208 Verifying : openblas-srpm-macros-2-11.el9.noarch 147/208 Verifying : perl-Digest-1.19-4.el9.noarch 148/208 Verifying : perl-File-Path-2.18-4.el9.noarch 149/208 Verifying : perl-IO-Socket-IP-0.41-5.el9.noarch 150/208 Verifying : perl-Pod-Usage-4:2.01-4.el9.noarch 151/208 Verifying : perl-Socket-4:2.031-4.el9.x86_64 152/208 Verifying : perl-Term-ANSIColor-5.01-461.el9.noarch 153/208 Verifying : perl-Text-ParseWords-3.30-460.el9.noarch 154/208 Verifying : perl-URI-5.09-3.el9.noarch 155/208 Verifying : perl-constant-1.33-461.el9.noarch 156/208 Verifying : fonts-srpm-macros-1:2.0.5-7.el9.1.noarch 157/208 Verifying : ocaml-srpm-macros-6-6.el9.noarch 158/208 Verifying : patch-2.7.6-16.el9.x86_64 159/208 Verifying : perl-Pod-Perldoc-3.28.01-461.el9.noarch 160/208 Verifying : perl-libnet-3.13-4.el9.noarch 161/208 Verifying : perl-Carp-1.50-460.el9.noarch 162/208 Verifying : perl-PathTools-3.78-461.el9.x86_64 163/208 Verifying : kernel-srpm-macros-1.0-13.el9.noarch 164/208 Verifying : qt5-srpm-macros-5.15.9-1.el9.noarch 165/208 Verifying : perl-AutoLoader-5.74-481.el9.noarch 166/208 Verifying : perl-SelectSaver-1.02-481.el9.noarch 167/208 Verifying : perl-if-0.60.800-481.el9.noarch 168/208 Verifying : perl-overloading-0.02-481.el9.noarch 169/208 Verifying : perl-B-1.80-481.el9.x86_64 170/208 Verifying : perl-Class-Struct-0.66-481.el9.noarch 171/208 Verifying : perl-Errno-1.30-481.el9.x86_64 172/208 Verifying : perl-File-Basename-2.85-481.el9.noarch 173/208 Verifying : perl-File-stat-1.09-481.el9.noarch 174/208 Verifying : perl-FileHandle-2.03-481.el9.noarch 175/208 Verifying : perl-Getopt-Std-1.12-481.el9.noarch 176/208 Verifying : perl-HTTP-Tiny-0.076-462.el9.noarch 177/208 Verifying : perl-IO-1.43-481.el9.x86_64 178/208 Verifying : perl-IPC-Open3-1.21-481.el9.noarch 179/208 Verifying : perl-POSIX-1.94-481.el9.x86_64 180/208 Verifying : perl-Symbol-1.08-481.el9.noarch 181/208 Verifying : perl-base-2.27-481.el9.noarch 182/208 Verifying : perl-interpreter-4:5.32.1-481.el9.x86_64 183/208 Verifying : perl-libs-4:5.32.1-481.el9.x86_64 184/208 Verifying : perl-mro-1.23-481.el9.x86_64 185/208 Verifying : perl-overload-1.31-481.el9.noarch 186/208 Verifying : perl-vars-1.05-481.el9.noarch 187/208 Verifying : pyproject-srpm-macros-1.12.0-1.el9.noarch 188/208 Verifying : debugedit-5.0-5.el9.x86_64 189/208 Verifying : perl-Fcntl-1.13-481.el9.x86_64 190/208 Verifying : perl-subs-1.03-481.el9.noarch 191/208 Verifying : go-srpm-macros-3.6.0-3.el9.noarch 192/208 Verifying : python-srpm-macros-3.9-54.el9.noarch 193/208 Verifying : redhat-rpm-config-208-1.el9.noarch 194/208 Verifying : gdb-minimal-14.2-3.el9.x86_64 195/208 Verifying : perl-IO-Socket-SSL-2.073-2.el9.noarch 196/208 Verifying : perl-Net-SSLeay-1.94-1.el9.x86_64 197/208 Verifying : perl-Scalar-List-Utils-4:1.56-462.el9.x86_64 198/208 Verifying : rpm-build-4.16.1.3-34.el9.x86_64 199/208 Verifying : ansible-srpm-macros-1-16.el9.noarch 200/208 Verifying : epel-rpm-macros-9-15.el9.noarch 201/208 Verifying : forge-srpm-macros-0.4.0-1.el9.noarch 202/208 Verifying : fpc-srpm-macros-1.3-7.el9.noarch 203/208 Verifying : ghc-srpm-macros-epel-1.9.1-1.el9.noarch 204/208 Verifying : go-srpm-macros-epel-3.6.0.1-1.el9.noarch 205/208 Verifying : qt6-srpm-macros-6.6.2-1.el9.noarch 206/208 Verifying : rpmautospec-rpm-macros-0.7.3-1.el9.noarch 207/208 Verifying : rust-srpm-macros-epel-26.3-1.el9.noarch 208/208 Installed products updated. Installed: alternatives-1.24-1.el9_5.1.x86_64 ansible-srpm-macros-1-16.el9.noarch audit-libs-3.1.5-1.el9.x86_64 basesystem-11-13.el9.noarch bash-5.1.8-9.el9.x86_64 binutils-2.35.2-54.el9.x86_64 binutils-gold-2.35.2-54.el9.x86_64 bzip2-1.0.8-10.el9_5.x86_64 bzip2-libs-1.0.8-10.el9_5.x86_64 ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch coreutils-8.32-36.el9.x86_64 coreutils-common-8.32-36.el9.x86_64 cpio-2.13-16.el9.x86_64 cracklib-2.9.6-27.el9.x86_64 cracklib-dicts-2.9.6-27.el9.x86_64 crypto-policies-20240828-2.git626aa59.el9_5.noarch curl-7.76.1-31.el9.x86_64 cyrus-sasl-lib-2.1.27-21.el9.x86_64 debugedit-5.0-5.el9.x86_64 diffutils-3.7-12.el9.x86_64 dwz-0.14-3.el9.x86_64 ed-1.14.2-12.el9.x86_64 efi-srpm-macros-6-2.el9_0.noarch elfutils-0.191-4.el9.x86_64 elfutils-debuginfod-client-0.191-4.el9.x86_64 elfutils-default-yama-scope-0.191-4.el9.noarch elfutils-libelf-0.191-4.el9.x86_64 elfutils-libs-0.191-4.el9.x86_64 epel-rpm-macros-9-15.el9.noarch file-5.39-16.el9.x86_64 file-libs-5.39-16.el9.x86_64 filesystem-3.16-5.el9.x86_64 findutils-1:4.8.0-7.el9.x86_64 fonts-srpm-macros-1:2.0.5-7.el9.1.noarch forge-srpm-macros-0.4.0-1.el9.noarch fpc-srpm-macros-1.3-7.el9.noarch gawk-5.1.0-6.el9.x86_64 gdb-minimal-14.2-3.el9.x86_64 gdbm-libs-1:1.23-1.el9.x86_64 ghc-srpm-macros-1.5.0-6.el9.noarch ghc-srpm-macros-epel-1.9.1-1.el9.noarch glibc-2.34-125.el9_5.1.x86_64 glibc-common-2.34-125.el9_5.1.x86_64 glibc-gconv-extra-2.34-125.el9_5.1.x86_64 glibc-minimal-langpack-2.34-125.el9_5.1.x86_64 gmp-1:6.2.0-13.el9.x86_64 go-srpm-macros-3.6.0-3.el9.noarch go-srpm-macros-epel-3.6.0.1-1.el9.noarch grep-3.6-5.el9.x86_64 groff-base-1.22.4-10.el9.x86_64 gzip-1.12-1.el9.x86_64 info-6.7-15.el9.x86_64 kernel-srpm-macros-1.0-13.el9.noarch keyutils-libs-1.6.3-1.el9.x86_64 krb5-libs-1.21.1-4.el9_5.x86_64 libacl-2.3.1-4.el9.x86_64 libarchive-3.5.3-4.el9.x86_64 libattr-2.5.1-3.el9.x86_64 libblkid-2.37.4-20.el9.x86_64 libbrotli-1.0.9-7.el9_5.x86_64 libcap-2.48-9.el9_2.x86_64 libcap-ng-0.8.2-7.el9.x86_64 libcom_err-1.46.5-5.el9.x86_64 libcurl-7.76.1-31.el9.x86_64 libdb-5.3.28-54.el9.x86_64 libeconf-0.4.1-4.el9.x86_64 libevent-2.1.12-8.el9_4.x86_64 libfdisk-2.37.4-20.el9.x86_64 libffi-3.4.2-8.el9.x86_64 libgcc-11.5.0-5.el9_5.x86_64 libgcrypt-1.10.0-11.el9.x86_64 libgomp-11.5.0-5.el9_5.x86_64 libgpg-error-1.42-5.el9.x86_64 libidn2-2.3.0-7.el9.x86_64 libmount-2.37.4-20.el9.x86_64 libnghttp2-1.43.0-6.el9.x86_64 libpkgconf-1.7.3-10.el9.x86_64 libpsl-0.21.1-5.el9.x86_64 libpwquality-1.4.4-8.el9.x86_64 libselinux-3.6-1.el9.x86_64 libsemanage-3.6-2.1.el9_5.x86_64 libsepol-3.6-1.el9.x86_64 libsigsegv-2.13-4.el9.x86_64 libsmartcols-2.37.4-20.el9.x86_64 libssh-0.10.4-13.el9.x86_64 libssh-config-0.10.4-13.el9.noarch libstdc++-11.5.0-5.el9_5.x86_64 libtasn1-4.16.0-8.el9_1.x86_64 libunistring-0.9.10-15.el9.x86_64 libutempter-1.2.1-6.el9.x86_64 libuuid-2.37.4-20.el9.x86_64 libverto-0.3.2-3.el9.x86_64 libxcrypt-4.4.18-3.el9.x86_64 libxml2-2.9.13-6.el9_5.1.x86_64 libzstd-1.5.1-2.el9.x86_64 lua-libs-5.4.4-4.el9.x86_64 lua-srpm-macros-1-6.el9.noarch lz4-libs-1.9.3-5.el9.x86_64 mpfr-4.1.0-7.el9.x86_64 ncurses-6.2-10.20210508.el9.x86_64 ncurses-base-6.2-10.20210508.el9.noarch ncurses-libs-6.2-10.20210508.el9.x86_64 ocaml-srpm-macros-6-6.el9.noarch openblas-srpm-macros-2-11.el9.noarch openldap-2.6.6-3.el9.x86_64 openssl-1:3.2.2-6.el9_5.1.x86_64 openssl-fips-provider-3.0.7-6.el9_5.x86_64 openssl-fips-provider-so-3.0.7-6.el9_5.x86_64 openssl-libs-1:3.2.2-6.el9_5.1.x86_64 p11-kit-0.25.3-3.el9_5.x86_64 p11-kit-trust-0.25.3-3.el9_5.x86_64 pam-1.5.1-22.el9_5.x86_64 patch-2.7.6-16.el9.x86_64 pcre-8.44-4.el9.x86_64 pcre2-10.40-6.el9.x86_64 pcre2-syntax-10.40-6.el9.noarch perl-AutoLoader-5.74-481.el9.noarch perl-B-1.80-481.el9.x86_64 perl-Carp-1.50-460.el9.noarch perl-Class-Struct-0.66-481.el9.noarch perl-Data-Dumper-2.174-462.el9.x86_64 perl-Digest-1.19-4.el9.noarch perl-Digest-MD5-2.58-4.el9.x86_64 perl-Encode-4:3.08-462.el9.x86_64 perl-Errno-1.30-481.el9.x86_64 perl-Exporter-5.74-461.el9.noarch perl-Fcntl-1.13-481.el9.x86_64 perl-File-Basename-2.85-481.el9.noarch perl-File-Path-2.18-4.el9.noarch perl-File-Temp-1:0.231.100-4.el9.noarch perl-File-stat-1.09-481.el9.noarch perl-FileHandle-2.03-481.el9.noarch perl-Getopt-Long-1:2.52-4.el9.noarch perl-Getopt-Std-1.12-481.el9.noarch perl-HTTP-Tiny-0.076-462.el9.noarch perl-IO-1.43-481.el9.x86_64 perl-IO-Socket-IP-0.41-5.el9.noarch perl-IO-Socket-SSL-2.073-2.el9.noarch perl-IPC-Open3-1.21-481.el9.noarch perl-MIME-Base64-3.16-4.el9.x86_64 perl-Mozilla-CA-20200520-6.el9.noarch perl-Net-SSLeay-1.94-1.el9.x86_64 perl-POSIX-1.94-481.el9.x86_64 perl-PathTools-3.78-461.el9.x86_64 perl-Pod-Escapes-1:1.07-460.el9.noarch perl-Pod-Perldoc-3.28.01-461.el9.noarch perl-Pod-Simple-1:3.42-4.el9.noarch perl-Pod-Usage-4:2.01-4.el9.noarch perl-Scalar-List-Utils-4:1.56-462.el9.x86_64 perl-SelectSaver-1.02-481.el9.noarch perl-Socket-4:2.031-4.el9.x86_64 perl-Storable-1:3.21-460.el9.x86_64 perl-Symbol-1.08-481.el9.noarch perl-Term-ANSIColor-5.01-461.el9.noarch perl-Term-Cap-1.17-460.el9.noarch perl-Text-ParseWords-3.30-460.el9.noarch perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch perl-Time-Local-2:1.300-7.el9.noarch perl-URI-5.09-3.el9.noarch perl-base-2.27-481.el9.noarch perl-constant-1.33-461.el9.noarch perl-if-0.60.800-481.el9.noarch perl-interpreter-4:5.32.1-481.el9.x86_64 perl-libnet-3.13-4.el9.noarch perl-libs-4:5.32.1-481.el9.x86_64 perl-mro-1.23-481.el9.x86_64 perl-overload-1.31-481.el9.noarch perl-overloading-0.02-481.el9.noarch perl-parent-1:0.238-460.el9.noarch perl-podlators-1:4.14-460.el9.noarch perl-srpm-macros-1-41.el9.noarch perl-subs-1.03-481.el9.noarch perl-vars-1.05-481.el9.noarch pkgconf-1.7.3-10.el9.x86_64 pkgconf-m4-1.7.3-10.el9.noarch pkgconf-pkg-config-1.7.3-10.el9.x86_64 popt-1.18-8.el9.x86_64 publicsuffix-list-dafsa-20210518-3.el9.noarch pyproject-srpm-macros-1.12.0-1.el9.noarch python-srpm-macros-3.9-54.el9.noarch qt5-srpm-macros-5.15.9-1.el9.noarch qt6-srpm-macros-6.6.2-1.el9.noarch readline-8.1-4.el9.x86_64 redhat-release-9.5-0.6.el9.x86_64 redhat-rpm-config-208-1.el9.noarch rpm-4.16.1.3-34.el9.x86_64 rpm-build-4.16.1.3-34.el9.x86_64 rpm-build-libs-4.16.1.3-34.el9.x86_64 rpm-libs-4.16.1.3-34.el9.x86_64 rpmautospec-rpm-macros-0.7.3-1.el9.noarch rust-srpm-macros-17-4.el9.noarch rust-srpm-macros-epel-26.3-1.el9.noarch sed-4.8-9.el9.x86_64 setup-2.13.7-10.el9.noarch shadow-utils-2:4.9-10.el9_5.x86_64 sqlite-libs-3.34.1-7.el9_3.x86_64 systemd-libs-252-46.el9_5.2.x86_64 tar-2:1.34-7.el9.x86_64 tzdata-2025a-1.el9.noarch unzip-6.0-57.el9.x86_64 util-linux-2.37.4-20.el9.x86_64 util-linux-core-2.37.4-20.el9.x86_64 which-2.21-29.el9.x86_64 xz-5.2.5-8.el9_0.x86_64 xz-libs-5.2.5-8.el9_0.x86_64 zip-3.0-35.el9.x86_64 zlib-1.2.11-40.el9.x86_64 zstd-1.5.1-2.el9.x86_64 Complete! Finish: installing minimal buildroot with dnf Start: creating root cache Finish: creating root cache Finish: chroot init INFO: Installed packages: INFO: alternatives-1.24-1.el9_5.1.x86_64 ansible-srpm-macros-1-16.el9.noarch audit-libs-3.1.5-1.el9.x86_64 basesystem-11-13.el9.noarch bash-5.1.8-9.el9.x86_64 binutils-2.35.2-54.el9.x86_64 binutils-gold-2.35.2-54.el9.x86_64 bzip2-1.0.8-10.el9_5.x86_64 bzip2-libs-1.0.8-10.el9_5.x86_64 ca-certificates-2024.2.69_v8.0.303-91.4.el9_4.noarch coreutils-8.32-36.el9.x86_64 coreutils-common-8.32-36.el9.x86_64 cpio-2.13-16.el9.x86_64 cracklib-2.9.6-27.el9.x86_64 cracklib-dicts-2.9.6-27.el9.x86_64 crypto-policies-20240828-2.git626aa59.el9_5.noarch curl-7.76.1-31.el9.x86_64 cyrus-sasl-lib-2.1.27-21.el9.x86_64 debugedit-5.0-5.el9.x86_64 diffutils-3.7-12.el9.x86_64 dwz-0.14-3.el9.x86_64 ed-1.14.2-12.el9.x86_64 efi-srpm-macros-6-2.el9_0.noarch elfutils-0.191-4.el9.x86_64 elfutils-debuginfod-client-0.191-4.el9.x86_64 elfutils-default-yama-scope-0.191-4.el9.noarch elfutils-libelf-0.191-4.el9.x86_64 elfutils-libs-0.191-4.el9.x86_64 epel-rpm-macros-9-15.el9.noarch file-5.39-16.el9.x86_64 file-libs-5.39-16.el9.x86_64 filesystem-3.16-5.el9.x86_64 findutils-4.8.0-7.el9.x86_64 fonts-srpm-macros-2.0.5-7.el9.1.noarch forge-srpm-macros-0.4.0-1.el9.noarch fpc-srpm-macros-1.3-7.el9.noarch gawk-5.1.0-6.el9.x86_64 gdb-minimal-14.2-3.el9.x86_64 gdbm-libs-1.23-1.el9.x86_64 ghc-srpm-macros-1.5.0-6.el9.noarch ghc-srpm-macros-epel-1.9.1-1.el9.noarch glibc-2.34-125.el9_5.1.x86_64 glibc-common-2.34-125.el9_5.1.x86_64 glibc-gconv-extra-2.34-125.el9_5.1.x86_64 glibc-minimal-langpack-2.34-125.el9_5.1.x86_64 gmp-6.2.0-13.el9.x86_64 go-srpm-macros-3.6.0-3.el9.noarch go-srpm-macros-epel-3.6.0.1-1.el9.noarch gpg-pubkey-3228467c-613798eb gpg-pubkey-5a6340b3-6229229e gpg-pubkey-fd431d51-4ae0493b grep-3.6-5.el9.x86_64 groff-base-1.22.4-10.el9.x86_64 gzip-1.12-1.el9.x86_64 info-6.7-15.el9.x86_64 kernel-srpm-macros-1.0-13.el9.noarch keyutils-libs-1.6.3-1.el9.x86_64 krb5-libs-1.21.1-4.el9_5.x86_64 libacl-2.3.1-4.el9.x86_64 libarchive-3.5.3-4.el9.x86_64 libattr-2.5.1-3.el9.x86_64 libblkid-2.37.4-20.el9.x86_64 libbrotli-1.0.9-7.el9_5.x86_64 libcap-2.48-9.el9_2.x86_64 libcap-ng-0.8.2-7.el9.x86_64 libcom_err-1.46.5-5.el9.x86_64 libcurl-7.76.1-31.el9.x86_64 libdb-5.3.28-54.el9.x86_64 libeconf-0.4.1-4.el9.x86_64 libevent-2.1.12-8.el9_4.x86_64 libfdisk-2.37.4-20.el9.x86_64 libffi-3.4.2-8.el9.x86_64 libgcc-11.5.0-5.el9_5.x86_64 libgcrypt-1.10.0-11.el9.x86_64 libgomp-11.5.0-5.el9_5.x86_64 libgpg-error-1.42-5.el9.x86_64 libidn2-2.3.0-7.el9.x86_64 libmount-2.37.4-20.el9.x86_64 libnghttp2-1.43.0-6.el9.x86_64 libpkgconf-1.7.3-10.el9.x86_64 libpsl-0.21.1-5.el9.x86_64 libpwquality-1.4.4-8.el9.x86_64 libselinux-3.6-1.el9.x86_64 libsemanage-3.6-2.1.el9_5.x86_64 libsepol-3.6-1.el9.x86_64 libsigsegv-2.13-4.el9.x86_64 libsmartcols-2.37.4-20.el9.x86_64 libssh-0.10.4-13.el9.x86_64 libssh-config-0.10.4-13.el9.noarch libstdc++-11.5.0-5.el9_5.x86_64 libtasn1-4.16.0-8.el9_1.x86_64 libunistring-0.9.10-15.el9.x86_64 libutempter-1.2.1-6.el9.x86_64 libuuid-2.37.4-20.el9.x86_64 libverto-0.3.2-3.el9.x86_64 libxcrypt-4.4.18-3.el9.x86_64 libxml2-2.9.13-6.el9_5.1.x86_64 libzstd-1.5.1-2.el9.x86_64 lua-libs-5.4.4-4.el9.x86_64 lua-srpm-macros-1-6.el9.noarch lz4-libs-1.9.3-5.el9.x86_64 mpfr-4.1.0-7.el9.x86_64 ncurses-6.2-10.20210508.el9.x86_64 ncurses-base-6.2-10.20210508.el9.noarch ncurses-libs-6.2-10.20210508.el9.x86_64 ocaml-srpm-macros-6-6.el9.noarch openblas-srpm-macros-2-11.el9.noarch openldap-2.6.6-3.el9.x86_64 openssl-3.2.2-6.el9_5.1.x86_64 openssl-fips-provider-3.0.7-6.el9_5.x86_64 openssl-fips-provider-so-3.0.7-6.el9_5.x86_64 openssl-libs-3.2.2-6.el9_5.1.x86_64 p11-kit-0.25.3-3.el9_5.x86_64 p11-kit-trust-0.25.3-3.el9_5.x86_64 pam-1.5.1-22.el9_5.x86_64 patch-2.7.6-16.el9.x86_64 pcre-8.44-4.el9.x86_64 pcre2-10.40-6.el9.x86_64 pcre2-syntax-10.40-6.el9.noarch perl-AutoLoader-5.74-481.el9.noarch perl-B-1.80-481.el9.x86_64 perl-Carp-1.50-460.el9.noarch perl-Class-Struct-0.66-481.el9.noarch perl-Data-Dumper-2.174-462.el9.x86_64 perl-Digest-1.19-4.el9.noarch perl-Digest-MD5-2.58-4.el9.x86_64 perl-Encode-3.08-462.el9.x86_64 perl-Errno-1.30-481.el9.x86_64 perl-Exporter-5.74-461.el9.noarch perl-Fcntl-1.13-481.el9.x86_64 perl-File-Basename-2.85-481.el9.noarch perl-File-Path-2.18-4.el9.noarch perl-File-Temp-0.231.100-4.el9.noarch perl-File-stat-1.09-481.el9.noarch perl-FileHandle-2.03-481.el9.noarch perl-Getopt-Long-2.52-4.el9.noarch perl-Getopt-Std-1.12-481.el9.noarch perl-HTTP-Tiny-0.076-462.el9.noarch perl-IO-1.43-481.el9.x86_64 perl-IO-Socket-IP-0.41-5.el9.noarch perl-IO-Socket-SSL-2.073-2.el9.noarch perl-IPC-Open3-1.21-481.el9.noarch perl-MIME-Base64-3.16-4.el9.x86_64 perl-Mozilla-CA-20200520-6.el9.noarch perl-Net-SSLeay-1.94-1.el9.x86_64 perl-POSIX-1.94-481.el9.x86_64 perl-PathTools-3.78-461.el9.x86_64 perl-Pod-Escapes-1.07-460.el9.noarch perl-Pod-Perldoc-3.28.01-461.el9.noarch perl-Pod-Simple-3.42-4.el9.noarch perl-Pod-Usage-2.01-4.el9.noarch perl-Scalar-List-Utils-1.56-462.el9.x86_64 perl-SelectSaver-1.02-481.el9.noarch perl-Socket-2.031-4.el9.x86_64 perl-Storable-3.21-460.el9.x86_64 perl-Symbol-1.08-481.el9.noarch perl-Term-ANSIColor-5.01-461.el9.noarch perl-Term-Cap-1.17-460.el9.noarch perl-Text-ParseWords-3.30-460.el9.noarch perl-Text-Tabs+Wrap-2013.0523-460.el9.noarch perl-Time-Local-1.300-7.el9.noarch perl-URI-5.09-3.el9.noarch perl-base-2.27-481.el9.noarch perl-constant-1.33-461.el9.noarch perl-if-0.60.800-481.el9.noarch perl-interpreter-5.32.1-481.el9.x86_64 perl-libnet-3.13-4.el9.noarch perl-libs-5.32.1-481.el9.x86_64 perl-mro-1.23-481.el9.x86_64 perl-overload-1.31-481.el9.noarch perl-overloading-0.02-481.el9.noarch perl-parent-0.238-460.el9.noarch perl-podlators-4.14-460.el9.noarch perl-srpm-macros-1-41.el9.noarch perl-subs-1.03-481.el9.noarch perl-vars-1.05-481.el9.noarch pkgconf-1.7.3-10.el9.x86_64 pkgconf-m4-1.7.3-10.el9.noarch pkgconf-pkg-config-1.7.3-10.el9.x86_64 popt-1.18-8.el9.x86_64 publicsuffix-list-dafsa-20210518-3.el9.noarch pyproject-srpm-macros-1.12.0-1.el9.noarch python-srpm-macros-3.9-54.el9.noarch qt5-srpm-macros-5.15.9-1.el9.noarch qt6-srpm-macros-6.6.2-1.el9.noarch readline-8.1-4.el9.x86_64 redhat-release-9.5-0.6.el9.x86_64 redhat-rpm-config-208-1.el9.noarch rpm-4.16.1.3-34.el9.x86_64 rpm-build-4.16.1.3-34.el9.x86_64 rpm-build-libs-4.16.1.3-34.el9.x86_64 rpm-libs-4.16.1.3-34.el9.x86_64 rpmautospec-rpm-macros-0.7.3-1.el9.noarch rust-srpm-macros-17-4.el9.noarch rust-srpm-macros-epel-26.3-1.el9.noarch sed-4.8-9.el9.x86_64 setup-2.13.7-10.el9.noarch shadow-utils-4.9-10.el9_5.x86_64 sqlite-libs-3.34.1-7.el9_3.x86_64 systemd-libs-252-46.el9_5.2.x86_64 tar-1.34-7.el9.x86_64 tzdata-2025a-1.el9.noarch unzip-6.0-57.el9.x86_64 util-linux-2.37.4-20.el9.x86_64 util-linux-core-2.37.4-20.el9.x86_64 which-2.21-29.el9.x86_64 xz-5.2.5-8.el9_0.x86_64 xz-libs-5.2.5-8.el9_0.x86_64 zip-3.0-35.el9.x86_64 zlib-1.2.11-40.el9.x86_64 zstd-1.5.1-2.el9.x86_64 Start: buildsrpm Start: rpmbuild -bs Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1740787200 Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.el9.src.rpm Finish: rpmbuild -bs INFO: chroot_scan: 3 files copied to /var/lib/copr-rpmbuild/results/chroot_scan INFO: /var/lib/mock/rhel+epel-9-x86_64-1740863293.500096/root/var/log/dnf.log /var/lib/mock/rhel+epel-9-x86_64-1740863293.500096/root/var/log/dnf.librepo.log /var/lib/mock/rhel+epel-9-x86_64-1740863293.500096/root/var/log/dnf.rpm.log INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz /bin/tar: Removing leading `/' from member names Finish: buildsrpm INFO: Done(/var/lib/copr-rpmbuild/workspace/workdir-01m0kx71/python-tapyoca/python-tapyoca.spec) Config(child) 1 minutes 26 seconds INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results INFO: Cleaning up build root ('cleanup_on_success=True') Start: clean chroot INFO: unmounting tmpfs. Finish: clean chroot INFO: Start(/var/lib/copr-rpmbuild/results/python-tapyoca-0.0.4-1.el9.src.rpm) Config(rhel+epel-9-x86_64) Start(bootstrap): chroot init INFO: mounting tmpfs at /var/lib/mock/rhel+epel-9-x86_64-bootstrap-1740863293.500096/root. INFO: reusing tmpfs at /var/lib/mock/rhel+epel-9-x86_64-bootstrap-1740863293.500096/root. INFO: calling preinit hooks INFO: enabled root cache INFO: enabled package manager cache Start(bootstrap): cleaning package manager metadata Finish(bootstrap): cleaning package manager metadata Finish(bootstrap): chroot init Start: chroot init INFO: mounting tmpfs at /var/lib/mock/rhel+epel-9-x86_64-1740863293.500096/root. INFO: calling preinit hooks INFO: enabled root cache Start: unpacking root cache Finish: unpacking root cache INFO: enabled package manager cache Start: cleaning package manager metadata Finish: cleaning package manager metadata INFO: enabled HW Info plugin INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-4.16.1.3-34.el9.x86_64 python3-dnf-4.14.0-17.el9.noarch python3-dnf-plugins-core-4.3.0-16.el9.noarch yum-4.14.0-17.el9.noarch Finish: chroot init Start: build phase for python-tapyoca-0.0.4-1.el9.src.rpm Start: build setup for python-tapyoca-0.0.4-1.el9.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1740787200 Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.el9.src.rpm No matches found for the following disable plugin patterns: local, spacewalk, versionlock Updating Subscription Management repositories. Unable to read consumer identity This system is not registered with an entitlement server. You can use subscription-manager to register. Copr repository 91 kB/s | 1.5 kB 00:00 Red Hat Enterprise Linux - BaseOS 62 kB/s | 4.1 kB 00:00 Red Hat Enterprise Linux - AppStream 53 kB/s | 4.5 kB 00:00 Red Hat Enterprise Linux - CodeReady Linux Buil 71 kB/s | 4.5 kB 00:00 Extra Packages for Enterprise Linux 9 - x86_64 1.0 MB/s | 35 kB 00:00 Dependencies resolved. ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: python3-devel x86_64 3.9.21-1.el9_5 appstream 249 k Installing dependencies: expat x86_64 2.5.0-3.el9_5.1 baseos 119 k pyproject-rpm-macros noarch 1.12.0-1.el9 codeready-builder 43 k python-rpm-macros noarch 3.9-54.el9 appstream 16 k python3 x86_64 3.9.21-1.el9_5 baseos 30 k python3-libs x86_64 3.9.21-1.el9_5 baseos 8.1 M python3-packaging noarch 23.1-1.el9 copr_base 80 k python3-pip-wheel noarch 21.3.1-1.el9 baseos 1.1 M python3-rpm-generators noarch 12-9.el9 appstream 29 k python3-rpm-macros noarch 3.9-54.el9 appstream 10 k python3-setuptools noarch 69.0.2-1.el9 copr_base 978 k python3-setuptools-wheel noarch 53.0.0-13.el9 baseos 469 k Transaction Summary ================================================================================ Install 12 Packages Total download size: 11 M Installed size: 40 M Downloading Packages: (1/12): python3-packaging-23.1-1.el9.noarch.rpm 5.1 MB/s | 80 kB 00:00 (2/12): python3-setuptools-69.0.2-1.el9.noarch. 43 MB/s | 978 kB 00:00 (3/12): python3-pip-wheel-21.3.1-1.el9.noarch.r 15 MB/s | 1.1 MB 00:00 (4/12): python3-setuptools-wheel-53.0.0-13.el9. 7.0 MB/s | 469 kB 00:00 (5/12): expat-2.5.0-3.el9_5.1.x86_64.rpm 1.8 MB/s | 119 kB 00:00 (6/12): python3-3.9.21-1.el9_5.x86_64.rpm 617 kB/s | 30 kB 00:00 (7/12): python3-libs-3.9.21-1.el9_5.x86_64.rpm 107 MB/s | 8.1 MB 00:00 (8/12): python3-rpm-macros-3.9-54.el9.noarch.rp 201 kB/s | 10 kB 00:00 (9/12): python3-rpm-generators-12-9.el9.noarch. 248 kB/s | 29 kB 00:00 (10/12): python-rpm-macros-3.9-54.el9.noarch.rp 260 kB/s | 16 kB 00:00 (11/12): python3-devel-3.9.21-1.el9_5.x86_64.rp 5.4 MB/s | 249 kB 00:00 (12/12): pyproject-rpm-macros-1.12.0-1.el9.noar 828 kB/s | 43 kB 00:00 -------------------------------------------------------------------------------- Total 43 MB/s | 11 MB 00:00 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : python-rpm-macros-3.9-54.el9.noarch 1/12 Installing : python3-rpm-macros-3.9-54.el9.noarch 2/12 Installing : expat-2.5.0-3.el9_5.1.x86_64 3/12 Installing : python3-setuptools-wheel-53.0.0-13.el9.noarch 4/12 Installing : python3-pip-wheel-21.3.1-1.el9.noarch 5/12 Installing : python3-3.9.21-1.el9_5.x86_64 6/12 Installing : python3-libs-3.9.21-1.el9_5.x86_64 7/12 Installing : python3-packaging-23.1-1.el9.noarch 8/12 Installing : python3-rpm-generators-12-9.el9.noarch 9/12 Installing : python3-setuptools-69.0.2-1.el9.noarch 10/12 Installing : python3-devel-3.9.21-1.el9_5.x86_64 11/12 Installing : pyproject-rpm-macros-1.12.0-1.el9.noarch 12/12 Running scriptlet: pyproject-rpm-macros-1.12.0-1.el9.noarch 12/12 Verifying : python3-packaging-23.1-1.el9.noarch 1/12 Verifying : python3-setuptools-69.0.2-1.el9.noarch 2/12 Verifying : python3-pip-wheel-21.3.1-1.el9.noarch 3/12 Verifying : python3-setuptools-wheel-53.0.0-13.el9.noarch 4/12 Verifying : expat-2.5.0-3.el9_5.1.x86_64 5/12 Verifying : python3-3.9.21-1.el9_5.x86_64 6/12 Verifying : python3-libs-3.9.21-1.el9_5.x86_64 7/12 Verifying : python3-rpm-generators-12-9.el9.noarch 8/12 Verifying : python3-rpm-macros-3.9-54.el9.noarch 9/12 Verifying : python-rpm-macros-3.9-54.el9.noarch 10/12 Verifying : python3-devel-3.9.21-1.el9_5.x86_64 11/12 Verifying : pyproject-rpm-macros-1.12.0-1.el9.noarch 12/12 Installed products updated. Installed: expat-2.5.0-3.el9_5.1.x86_64 pyproject-rpm-macros-1.12.0-1.el9.noarch python-rpm-macros-3.9-54.el9.noarch python3-3.9.21-1.el9_5.x86_64 python3-devel-3.9.21-1.el9_5.x86_64 python3-libs-3.9.21-1.el9_5.x86_64 python3-packaging-23.1-1.el9.noarch python3-pip-wheel-21.3.1-1.el9.noarch python3-rpm-generators-12-9.el9.noarch python3-rpm-macros-3.9-54.el9.noarch python3-setuptools-69.0.2-1.el9.noarch python3-setuptools-wheel-53.0.0-13.el9.noarch Complete! Finish: build setup for python-tapyoca-0.0.4-1.el9.src.rpm Start: rpmbuild python-tapyoca-0.0.4-1.el9.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1740787200 Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.eHwKPS + umask 022 + cd /builddir/build/BUILD + cd /builddir/build/BUILD + rm -rf tapyoca-0.0.4 + /usr/bin/gzip -dc /builddir/build/SOURCES/tapyoca-0.0.4.tar.gz + /usr/bin/tar -xof - + STATUS=0 + '[' 0 -ne 0 ']' + cd tapyoca-0.0.4 + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + RPM_EC=0 ++ jobs -p + exit 0 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.MK8uVC + umask 022 + cd /builddir/build/BUILD + cd tapyoca-0.0.4 + echo pyproject-rpm-macros + echo python3-devel + echo 'python3dist(pip) >= 19' + echo 'python3dist(packaging)' + '[' -f pyproject.toml ']' + '[' -f setup.py ']' + echo 'python3dist(setuptools) >= 40.8' + echo 'python3dist(wheel)' + rm -rfv '*.dist-info/' + '[' -f /usr/bin/python3 ']' + mkdir -p /builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir + echo -n + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 ' + TMPDIR=/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir + RPM_TOXENV=py39 + HOSTNAME=rpmbuild + /usr/bin/python3 -Bs /usr/lib/rpm/redhat/pyproject_buildrequires.py --generate-extras --python3_pkgversion 3 --wheeldir /builddir/build/BUILD/tapyoca-0.0.4/pyproject-wheeldir --output /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-buildrequires Handling setuptools >= 40.8 from default build backend Requirement satisfied: setuptools >= 40.8 (installed: setuptools 69.0.2) Handling wheel from default build backend Requirement not satisfied: wheel Exiting dependency generation pass: build backend + cat /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-buildrequires + rm -rfv '*.dist-info/' + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.el9.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires No matches found for the following disable plugin patterns: local, spacewalk, versionlock Updating Subscription Management repositories. Unable to read consumer identity This system is not registered with an entitlement server. You can use subscription-manager to register. Copr repository 96 kB/s | 1.5 kB 00:00 Red Hat Enterprise Linux - BaseOS 70 kB/s | 4.1 kB 00:00 Red Hat Enterprise Linux - AppStream 73 kB/s | 4.5 kB 00:00 Red Hat Enterprise Linux - CodeReady Linux Buil 40 kB/s | 4.5 kB 00:00 Extra Packages for Enterprise Linux 9 - x86_64 890 kB/s | 35 kB 00:00 Package pyproject-rpm-macros-1.12.0-1.el9.noarch is already installed. Package python3-devel-3.9.21-1.el9_5.x86_64 is already installed. Package python3-devel-3.9.21-1.el9_5.x86_64 is already installed. Package python3-packaging-23.1-1.el9.noarch is already installed. Package python3-setuptools-69.0.2-1.el9.noarch is already installed. Dependencies resolved. ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: python3-pip noarch 22.0.4-1.el9 copr_base 2.7 M python3-wheel noarch 1:0.36.2-8.el9 codeready-builder 87 k Transaction Summary ================================================================================ Install 2 Packages Total download size: 2.7 M Installed size: 13 M Downloading Packages: (1/2): python3-pip-22.0.4-1.el9.noarch.rpm 124 MB/s | 2.7 MB 00:00 (2/2): python3-wheel-0.36.2-8.el9.noarch.rpm 1.4 MB/s | 87 kB 00:00 -------------------------------------------------------------------------------- Total 44 MB/s | 2.7 MB 00:00 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : python3-wheel-1:0.36.2-8.el9.noarch 1/2 Installing : python3-pip-22.0.4-1.el9.noarch 2/2 Running scriptlet: python3-pip-22.0.4-1.el9.noarch 2/2 Verifying : python3-pip-22.0.4-1.el9.noarch 1/2 Verifying : python3-wheel-1:0.36.2-8.el9.noarch 2/2 Installed products updated. Installed: python3-pip-22.0.4-1.el9.noarch python3-wheel-1:0.36.2-8.el9.noarch Complete! Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1740787200 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.N36FPq + umask 022 + cd /builddir/build/BUILD + cd tapyoca-0.0.4 + echo pyproject-rpm-macros + echo python3-devel + echo 'python3dist(pip) >= 19' + echo 'python3dist(packaging)' + '[' -f pyproject.toml ']' + '[' -f setup.py ']' + echo 'python3dist(setuptools) >= 40.8' + echo 'python3dist(wheel)' + rm -rfv '*.dist-info/' + '[' -f /usr/bin/python3 ']' + mkdir -p /builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir + echo -n + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 ' + TMPDIR=/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir + RPM_TOXENV=py39 + HOSTNAME=rpmbuild + /usr/bin/python3 -Bs /usr/lib/rpm/redhat/pyproject_buildrequires.py --generate-extras --python3_pkgversion 3 --wheeldir /builddir/build/BUILD/tapyoca-0.0.4/pyproject-wheeldir --output /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-buildrequires Handling setuptools >= 40.8 from default build backend Requirement satisfied: setuptools >= 40.8 (installed: setuptools 69.0.2) Handling wheel from default build backend Requirement satisfied: wheel (installed: wheel 0.36.2) !!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. \n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonys\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of people for different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't. If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives of inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicitly I'll saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow is listed 73 demonyms along with words extracted from the very first google suggestion when you type. \n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, where ever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, number of suggestions range between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(proxy for \"the number of different things people ask about the said nationality). \n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo a long tail of words use only for one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping an entry such as \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested what the suggestions \nfor `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each if the results were non-empty. \n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired from a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired from an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus at last 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7 because formatannotationrelativeto isn't in the dictionary)\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one. \nThe intersection of two lists: The [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, I only kept of those, those that had at least 2 letters, and had only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nIm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!) \nextracting (recursively( the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThe module will show is how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun like this\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n ```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCorona virus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes under the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip` with several `.csv` files in them. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` already. \n\n\n```python\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Province/StateCountry/RegionLatLong1/22/201/23/201/24/201/25/201/26/201/27/20...3/24/203/25/203/26/203/27/203/28/203/29/203/30/203/31/204/1/204/2/20
0NaNAfghanistan33.000065.0000000000...748494110110120170174237273
1NaNAlbania41.153320.1683000000...123146174186197212223243259277
2NaNAlgeria28.03391.6596000000...264302367409454511584716847986
3NaNAndorra42.50631.5218000000...164188224267308334370376390428
4NaNAngola-11.202717.8739000000...3344577788
\n

5 rows \u00d7 76 columns

\n
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'confirmed_US', 'confirmed_US', 'recovered'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n # df = s['time_series_covid_19_deaths.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need\n p = re.compile('\\d+/\\d+/\\d+')\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/pargs\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n \n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n else: \n pass\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\n# for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGot to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryflag_image_url
0Angolahttps://www.countryflags.io/ao/flat/64.png
1Burundihttps://www.countryflags.io/bi/flat/64.png
2Beninhttps://www.countryflags.io/bj/flat/64.png
3Burkina Fasohttps://www.countryflags.io/bf/flat/64.png
4Botswanahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are are caching.py). \n\nSo let's say we're going to have a local folder where we'll store various datas we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time the_cached_source will get it's green from that the_cache\")\n```\n\n the_cache: {}\n Getting green...\n the_cache: {'green': 'eggs'}\n ... so the next time the_cached_source will get it's green from that the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) looks like? In it's raw form it would take urls as it's keys, and the response of a request as it's value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as is, to be a local file path. \n\nThere's many ways we could solve this. One way is to add a key map layer on the cache store, so externally, it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.\n\nSo let's start by building this `MyDataStore` store. We'll start by defining the functions that get us the data we want. \n\n\n```python\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned value as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n _getitem_cache_size = 999\n def __init__(self, funcs):\n # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n self.funcs = funcs\n self._func_of_name = {func.__name__: func for func in funcs}\n\n def __contains__(self, k):\n return k in self._func_of_name\n \n def __iter__(self):\n yield from self._func_of_name\n \n def __len__(self):\n return len(self._func_of_name)\n\n @lru_cache(maxsize=_getitem_cache_size)\n def __getitem__(self, k):\n return self._func_of_name[k]() # call the func\n \n def __hash__(self):\n return 1\n \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] }/usr/lib/python3.9/site-packages/setuptools/dist.py:472: SetuptoolsDeprecationWarning: Invalid dash-separated options !! ******************************************************************************** Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead. This deprecation is overdue, please update your project and remove deprecated calls to avoid build errors in the future. See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details. ******************************************************************************** !! opt = self.warn_dash_deprecation(opt, section) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description_file' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'root_url' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description-file' warnings.warn(msg) -------------------------------------------------------------------- running egg_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' Handling wheel from get_requires_for_build_wheel Requirement satisfied: wheel (installed: wheel 0.36.2) !!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. \n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonys\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of people for different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't. If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives of inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicitly I'll saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow is listed 73 demonyms along with words extracted from the very first google suggestion when you type. \n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, where ever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, number of suggestions range between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(proxy for \"the number of different things people ask about the said nationality). \n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo a long tail of words use only for one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping an entry such as \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested what the suggestions \nfor `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each if the results were non-empty. \n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired from a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired from an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus at last 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7 because formatannotationrelativeto isn't in the dictionary)\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one. \nThe intersection of two lists: The [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, I only kept of those, those that had at least 2 letters, and had only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nIm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!) \nextracting (recursively( the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThe module will show is how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun like this\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n ```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCorona virus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes under the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip` with several `.csv` files in them. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` already. \n\n\n```python\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Province/StateCountry/RegionLatLong1/22/201/23/201/24/201/25/201/26/201/27/20...3/24/203/25/203/26/203/27/203/28/203/29/203/30/203/31/204/1/204/2/20
0NaNAfghanistan33.000065.0000000000...748494110110120170174237273
1NaNAlbania41.153320.1683000000...123146174186197212223243259277
2NaNAlgeria28.03391.6596000000...264302367409454511584716847986
3NaNAndorra42.50631.5218000000...164188224267308334370376390428
4NaNAngola-11.202717.8739000000...3344577788
\n

5 rows \u00d7 76 columns

\n
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'confirmed_US', 'confirmed_US', 'recovered'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n # df = s['time_series_covid_19_deaths.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need\n p = re.compile('\\d+/\\d+/\\d+')\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/pargs\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n \n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n else: \n pass\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\n# for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGot to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryflag_image_url
0Angolahttps://www.countryflags.io/ao/flat/64.png
1Burundihttps://www.countryflags.io/bi/flat/64.png
2Beninhttps://www.countryflags.io/bj/flat/64.png
3Burkina Fasohttps://www.countryflags.io/bf/flat/64.png
4Botswanahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are are caching.py). \n\nSo let's say we're going to have a local folder where we'll store various datas we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time the_cached_source will get it's green from that the_cache\")\n```\n\n the_cache: {}\n Getting green...\n the_cache: {'green': 'eggs'}\n ... so the next time the_cached_source will get it's green from that the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) looks like? In it's raw form it would take urls as it's keys, and the response of a request as it's value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as is, to be a local file path. \n\nThere's many ways we could solve this. One way is to add a key map layer on the cache store, so externally, it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.\n\nSo let's start by building this `MyDataStore` store. We'll start by defining the functions that get us the data we want. \n\n\n```python\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned value as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n _getitem_cache_size = 999\n def __init__(self, funcs):\n # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n self.funcs = funcs\n self._func_of_name = {func.__name__: func for func in funcs}\n\n def __contains__(self, k):\n return k in self._func_of_name\n \n def __iter__(self):\n yield from self._func_of_name\n \n def __len__(self):\n return len(self._func_of_name)\n\n @lru_cache(maxsize=_getitem_cache_size)\n def __getitem__(self, k):\n return self._func_of_name[k]() # call the func\n \n def __hash__(self):\n return 1\n \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] } -------------------------------------------------------------------- running dist_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' creating '/builddir/build/BUILD/tapyoca-0.0.4/tapyoca-0.0.4.dist-info' adding license file "LICENSE" (matched pattern "LICEN[CS]E*") + cat /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-buildrequires + rm -rfv tapyoca-0.0.4.dist-info/ removed 'tapyoca-0.0.4.dist-info/top_level.txt' removed 'tapyoca-0.0.4.dist-info/METADATA' removed 'tapyoca-0.0.4.dist-info/LICENSE' removed directory 'tapyoca-0.0.4.dist-info/' + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.el9.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires No matches found for the following disable plugin patterns: local, spacewalk, versionlock Updating Subscription Management repositories. Unable to read consumer identity This system is not registered with an entitlement server. You can use subscription-manager to register. Copr repository 99 kB/s | 1.5 kB 00:00 Red Hat Enterprise Linux - BaseOS 66 kB/s | 4.1 kB 00:00 Red Hat Enterprise Linux - AppStream 65 kB/s | 4.5 kB 00:00 Red Hat Enterprise Linux - CodeReady Linux Buil 80 kB/s | 4.5 kB 00:00 Extra Packages for Enterprise Linux 9 - x86_64 121 kB/s | 35 kB 00:00 Package pyproject-rpm-macros-1.12.0-1.el9.noarch is already installed. Package python3-devel-3.9.21-1.el9_5.x86_64 is already installed. Package python3-devel-3.9.21-1.el9_5.x86_64 is already installed. Package python3-packaging-23.1-1.el9.noarch is already installed. Package python3-pip-22.0.4-1.el9.noarch is already installed. Package python3-setuptools-69.0.2-1.el9.noarch is already installed. Package python3-wheel-1:0.36.2-8.el9.noarch is already installed. Dependencies resolved. Nothing to do. Complete! Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1740787200 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.AA5CtZ + umask 022 + cd /builddir/build/BUILD + cd tapyoca-0.0.4 + echo pyproject-rpm-macros + echo python3-devel + echo 'python3dist(pip) >= 19' + echo 'python3dist(packaging)' + '[' -f pyproject.toml ']' + '[' -f setup.py ']' + echo 'python3dist(setuptools) >= 40.8' + echo 'python3dist(wheel)' + rm -rfv '*.dist-info/' + '[' -f /usr/bin/python3 ']' + mkdir -p /builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir + echo -n + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 ' + TMPDIR=/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir + RPM_TOXENV=py39 + HOSTNAME=rpmbuild + /usr/bin/python3 -Bs /usr/lib/rpm/redhat/pyproject_buildrequires.py --generate-extras --python3_pkgversion 3 --wheeldir /builddir/build/BUILD/tapyoca-0.0.4/pyproject-wheeldir --output /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-buildrequires Handling setuptools >= 40.8 from default build backend Requirement satisfied: setuptools >= 40.8 (installed: setuptools 69.0.2) Handling wheel from default build backend Requirement satisfied: wheel (installed: wheel 0.36.2) !!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. \n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonys\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of people for different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't. If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives of inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicitly I'll saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow is listed 73 demonyms along with words extracted from the very first google suggestion when you type. \n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, where ever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, number of suggestions range between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(proxy for \"the number of different things people ask about the said nationality). \n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo a long tail of words use only for one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping an entry such as \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested what the suggestions \nfor `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each if the results were non-empty. \n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired from a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired from an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus at last 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7 because formatannotationrelativeto isn't in the dictionary)\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one. \nThe intersection of two lists: The [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, I only kept of those, those that had at least 2 letters, and had only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nIm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!) \nextracting (recursively( the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThe module will show is how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun like this\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n ```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCorona virus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes under the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip` with several `.csv` files in them. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` already. \n\n\n```python\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Province/StateCountry/RegionLatLong1/22/201/23/201/24/201/25/201/26/201/27/20...3/24/203/25/203/26/203/27/203/28/203/29/203/30/203/31/204/1/204/2/20
0NaNAfghanistan33.000065.0000000000...748494110110120170174237273
1NaNAlbania41.153320.1683000000...123146174186197212223243259277
2NaNAlgeria28.03391.6596000000...264302367409454511584716847986
3NaNAndorra42.50631.5218000000...164188224267308334370376390428
4NaNAngola-11.202717.8739000000...3344577788
\n

5 rows \u00d7 76 columns

\n
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'confirmed_US', 'confirmed_US', 'recovered'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n # df = s['time_series_covid_19_deaths.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need\n p = re.compile('\\d+/\\d+/\\d+')\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/pargs\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n \n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n else: \n pass\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\n# for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGot to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryflag_image_url
0Angolahttps://www.countryflags.io/ao/flat/64.png
1Burundihttps://www.countryflags.io/bi/flat/64.png
2Beninhttps://www.countryflags.io/bj/flat/64.png
3Burkina Fasohttps://www.countryflags.io/bf/flat/64.png
4Botswanahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are are caching.py). \n\nSo let's say we're going to have a local folder where we'll store various datas we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time the_cached_source will get it's green from that the_cache\")\n```\n\n the_cache: {}\n Getting green...\n the_cache: {'green': 'eggs'}\n ... so the next time the_cached_source will get it's green from that the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) looks like? In it's raw form it would take urls as it's keys, and the response of a request as it's value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as is, to be a local file path. \n\nThere's many ways we could solve this. One way is to add a key map layer on the cache store, so externally, it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.\n\nSo let's start by building this `MyDataStore` store. We'll start by defining the functions that get us the data we want. \n\n\n```python\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned value as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n _getitem_cache_size = 999\n def __init__(self, funcs):\n # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n self.funcs = funcs\n self._func_of_name = {func.__name__: func for func in funcs}\n\n def __contains__(self, k):\n return k in self._func_of_name\n \n def __iter__(self):\n yield from self._func_of_name\n \n def __len__(self):\n return len(self._func_of_name)\n\n @lru_cache(maxsize=_getitem_cache_size)\n def __getitem__(self, k):\n return self._func_of_name[k]() # call the func\n \n def __hash__(self):\n return 1\n \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] }/usr/lib/python3.9/site-packages/setuptools/dist.py:472: SetuptoolsDeprecationWarning: Invalid dash-separated options !! ******************************************************************************** Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead. This deprecation is overdue, please update your project and remove deprecated calls to avoid build errors in the future. See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details. ******************************************************************************** !! opt = self.warn_dash_deprecation(opt, section) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description_file' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'root_url' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description-file' warnings.warn(msg) -------------------------------------------------------------------- running egg_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' Handling wheel from get_requires_for_build_wheel Requirement satisfied: wheel (installed: wheel 0.36.2) !!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. \n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonys\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of people for different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't. If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives of inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicitly I'll saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow is listed 73 demonyms along with words extracted from the very first google suggestion when you type. \n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, where ever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, number of suggestions range between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(proxy for \"the number of different things people ask about the said nationality). \n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo a long tail of words use only for one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping an entry such as \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested what the suggestions \nfor `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each if the results were non-empty. \n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired from a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired from an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus at last 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7 because formatannotationrelativeto isn't in the dictionary)\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one. \nThe intersection of two lists: The [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, I only kept of those, those that had at least 2 letters, and had only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nIm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!) \nextracting (recursively( the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThe module will show is how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun like this\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n ```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCorona virus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes under the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip` with several `.csv` files in them. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` already. \n\n\n```python\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Province/StateCountry/RegionLatLong1/22/201/23/201/24/201/25/201/26/201/27/20...3/24/203/25/203/26/203/27/203/28/203/29/203/30/203/31/204/1/204/2/20
0NaNAfghanistan33.000065.0000000000...748494110110120170174237273
1NaNAlbania41.153320.1683000000...123146174186197212223243259277
2NaNAlgeria28.03391.6596000000...264302367409454511584716847986
3NaNAndorra42.50631.5218000000...164188224267308334370376390428
4NaNAngola-11.202717.8739000000...3344577788
\n

5 rows \u00d7 76 columns

\n
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'confirmed_US', 'confirmed_US', 'recovered'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n # df = s['time_series_covid_19_deaths.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need\n p = re.compile('\\d+/\\d+/\\d+')\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/pargs\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n \n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n else: \n pass\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\n# for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGot to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryflag_image_url
0Angolahttps://www.countryflags.io/ao/flat/64.png
1Burundihttps://www.countryflags.io/bi/flat/64.png
2Beninhttps://www.countryflags.io/bj/flat/64.png
3Burkina Fasohttps://www.countryflags.io/bf/flat/64.png
4Botswanahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are are caching.py). \n\nSo let's say we're going to have a local folder where we'll store various datas we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time the_cached_source will get it's green from that the_cache\")\n```\n\n the_cache: {}\n Getting green...\n the_cache: {'green': 'eggs'}\n ... so the next time the_cached_source will get it's green from that the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) looks like? In it's raw form it would take urls as it's keys, and the response of a request as it's value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as is, to be a local file path. \n\nThere's many ways we could solve this. One way is to add a key map layer on the cache store, so externally, it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.\n\nSo let's start by building this `MyDataStore` store. We'll start by defining the functions that get us the data we want. \n\n\n```python\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned value as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n _getitem_cache_size = 999\n def __init__(self, funcs):\n # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n self.funcs = funcs\n self._func_of_name = {func.__name__: func for func in funcs}\n\n def __contains__(self, k):\n return k in self._func_of_name\n \n def __iter__(self):\n yield from self._func_of_name\n \n def __len__(self):\n return len(self._func_of_name)\n\n @lru_cache(maxsize=_getitem_cache_size)\n def __getitem__(self, k):\n return self._func_of_name[k]() # call the func\n \n def __hash__(self):\n return 1\n \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] } -------------------------------------------------------------------- running dist_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' creating '/builddir/build/BUILD/tapyoca-0.0.4/tapyoca-0.0.4.dist-info' adding license file "LICENSE" (matched pattern "LICEN[CS]E*") + cat /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-buildrequires + rm -rfv tapyoca-0.0.4.dist-info/ removed 'tapyoca-0.0.4.dist-info/top_level.txt' removed 'tapyoca-0.0.4.dist-info/METADATA' removed 'tapyoca-0.0.4.dist-info/LICENSE' removed directory 'tapyoca-0.0.4.dist-info/' + RPM_EC=0 ++ jobs -p + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.5pGjw2 + umask 022 + cd /builddir/build/BUILD + cd tapyoca-0.0.4 + mkdir -p /builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 ' + TMPDIR=/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir + /usr/bin/python3 -Bs /usr/lib/rpm/redhat/pyproject_wheel.py /builddir/build/BUILD/tapyoca-0.0.4/pyproject-wheeldir Processing /builddir/build/BUILD/tapyoca-0.0.4 Preparing metadata (pyproject.toml): started Running command Preparing metadata (pyproject.toml) !!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. \n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonys\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of people for different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't. If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives of inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicitly I'll saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow is listed 73 demonyms along with words extracted from the very first google suggestion when you type. \n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, where ever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, number of suggestions range between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(proxy for \"the number of different things people ask about the said nationality). \n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo a long tail of words use only for one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping an entry such as \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested what the suggestions \nfor `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each if the results were non-empty. \n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired from a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired from an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus at last 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7 because formatannotationrelativeto isn't in the dictionary)\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one. \nThe intersection of two lists: The [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, I only kept of those, those that had at least 2 letters, and had only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nIm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!) \nextracting (recursively( the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThe module will show is how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun like this\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n ```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCorona virus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes under the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip` with several `.csv` files in them. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` already. \n\n\n```python\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Province/StateCountry/RegionLatLong1/22/201/23/201/24/201/25/201/26/201/27/20...3/24/203/25/203/26/203/27/203/28/203/29/203/30/203/31/204/1/204/2/20
0NaNAfghanistan33.000065.0000000000...748494110110120170174237273
1NaNAlbania41.153320.1683000000...123146174186197212223243259277
2NaNAlgeria28.03391.6596000000...264302367409454511584716847986
3NaNAndorra42.50631.5218000000...164188224267308334370376390428
4NaNAngola-11.202717.8739000000...3344577788
\n

5 rows \u00d7 76 columns

\n
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'confirmed_US', 'confirmed_US', 'recovered'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n # df = s['time_series_covid_19_deaths.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need\n p = re.compile('\\d+/\\d+/\\d+')\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/pargs\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n \n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n else: \n pass\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\n# for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGot to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryflag_image_url
0Angolahttps://www.countryflags.io/ao/flat/64.png
1Burundihttps://www.countryflags.io/bi/flat/64.png
2Beninhttps://www.countryflags.io/bj/flat/64.png
3Burkina Fasohttps://www.countryflags.io/bf/flat/64.png
4Botswanahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are are caching.py). \n\nSo let's say we're going to have a local folder where we'll store various datas we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time the_cached_source will get it's green from that the_cache\")\n```\n\n the_cache: {}\n Getting green...\n the_cache: {'green': 'eggs'}\n ... so the next time the_cached_source will get it's green from that the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) looks like? In it's raw form it would take urls as it's keys, and the response of a request as it's value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as is, to be a local file path. \n\nThere's many ways we could solve this. One way is to add a key map layer on the cache store, so externally, it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.\n\nSo let's start by building this `MyDataStore` store. We'll start by defining the functions that get us the data we want. \n\n\n```python\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned value as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n _getitem_cache_size = 999\n def __init__(self, funcs):\n # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n self.funcs = funcs\n self._func_of_name = {func.__name__: func for func in funcs}\n\n def __contains__(self, k):\n return k in self._func_of_name\n \n def __iter__(self):\n yield from self._func_of_name\n \n def __len__(self):\n return len(self._func_of_name)\n\n @lru_cache(maxsize=_getitem_cache_size)\n def __getitem__(self, k):\n return self._func_of_name[k]() # call the func\n \n def __hash__(self):\n return 1\n \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] }/usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description_file' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'root_url' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description-file' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/dist.py:472: SetuptoolsDeprecationWarning: Invalid dash-separated options !! ******************************************************************************** Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead. This deprecation is overdue, please update your project and remove deprecated calls to avoid build errors in the future. See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details. ******************************************************************************** !! opt = self.warn_dash_deprecation(opt, section) -------------------------------------------------------------------- running dist_info creating /builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-feb76yti/tapyoca.egg-info writing /builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-feb76yti/tapyoca.egg-info/PKG-INFO writing dependency_links to /builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-feb76yti/tapyoca.egg-info/dependency_links.txt writing top-level names to /builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-feb76yti/tapyoca.egg-info/top_level.txt writing manifest file '/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-feb76yti/tapyoca.egg-info/SOURCES.txt' reading manifest file '/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-feb76yti/tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file '/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-feb76yti/tapyoca.egg-info/SOURCES.txt' creating '/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-feb76yti/tapyoca-0.0.4.dist-info' adding license file "LICENSE" (matched pattern "LICEN[CS]E*") Preparing metadata (pyproject.toml): finished with status 'done' Building wheels for collected packages: tapyoca Building wheel for tapyoca (pyproject.toml): started Running command Building wheel for tapyoca (pyproject.toml) !!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. \n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonys\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of people for different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't. If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives of inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicitly I'll saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow is listed 73 demonyms along with words extracted from the very first google suggestion when you type. \n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, where ever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, number of suggestions range between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(proxy for \"the number of different things people ask about the said nationality). \n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo a long tail of words use only for one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping an entry such as \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested what the suggestions \nfor `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each if the results were non-empty. \n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired from a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired from an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus at last 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7 because formatannotationrelativeto isn't in the dictionary)\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one. \nThe intersection of two lists: The [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, I only kept of those, those that had at least 2 letters, and had only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nIm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!) \nextracting (recursively( the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThe module will show is how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun like this\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n ```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCorona virus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes under the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip` with several `.csv` files in them. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` already. \n\n\n```python\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Province/StateCountry/RegionLatLong1/22/201/23/201/24/201/25/201/26/201/27/20...3/24/203/25/203/26/203/27/203/28/203/29/203/30/203/31/204/1/204/2/20
0NaNAfghanistan33.000065.0000000000...748494110110120170174237273
1NaNAlbania41.153320.1683000000...123146174186197212223243259277
2NaNAlgeria28.03391.6596000000...264302367409454511584716847986
3NaNAndorra42.50631.5218000000...164188224267308334370376390428
4NaNAngola-11.202717.8739000000...3344577788
\n

5 rows \u00d7 76 columns

\n
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'confirmed_US', 'confirmed_US', 'recovered'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n # df = s['time_series_covid_19_deaths.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need\n p = re.compile('\\d+/\\d+/\\d+')\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/pargs\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n \n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n else: \n pass\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\n# for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGot to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryflag_image_url
0Angolahttps://www.countryflags.io/ao/flat/64.png
1Burundihttps://www.countryflags.io/bi/flat/64.png
2Beninhttps://www.countryflags.io/bj/flat/64.png
3Burkina Fasohttps://www.countryflags.io/bf/flat/64.png
4Botswanahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are are caching.py). \n\nSo let's say we're going to have a local folder where we'll store various datas we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time the_cached_source will get it's green from that the_cache\")\n```\n\n the_cache: {}\n Getting green...\n the_cache: {'green': 'eggs'}\n ... so the next time the_cached_source will get it's green from that the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) looks like? In it's raw form it would take urls as it's keys, and the response of a request as it's value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as is, to be a local file path. \n\nThere's many ways we could solve this. One way is to add a key map layer on the cache store, so externally, it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.\n\nSo let's start by building this `MyDataStore` store. We'll start by defining the functions that get us the data we want. \n\n\n```python\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned value as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n _getitem_cache_size = 999\n def __init__(self, funcs):\n # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n self.funcs = funcs\n self._func_of_name = {func.__name__: func for func in funcs}\n\n def __contains__(self, k):\n return k in self._func_of_name\n \n def __iter__(self):\n yield from self._func_of_name\n \n def __len__(self):\n return len(self._func_of_name)\n\n @lru_cache(maxsize=_getitem_cache_size)\n def __getitem__(self, k):\n return self._func_of_name[k]() # call the func\n \n def __hash__(self):\n return 1\n \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
............
210Solomon IslandsOceaniahttps://www.countryflags.io/sb/flat/64.png
211TongaOceaniahttps://www.countryflags.io/to/flat/64.png
212TuvaluOceaniahttps://www.countryflags.io/tv/flat/64.png
213VanuatuOceaniahttps://www.countryflags.io/vu/flat/64.png
214SamoaOceaniahttps://www.countryflags.io/ws/flat/64.png
\n

215 rows \u00d7 3 columns

\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
namegroupyearvaluesubGroupcity_idlastValuelatlon
0AgraIndia1575200.0IndiaAgra - India200.027.1833378.01667
1AgraIndia1576212.0IndiaAgra - India200.027.1833378.01667
2AgraIndia1577224.0IndiaAgra - India212.027.1833378.01667
3AgraIndia1578236.0IndiaAgra - India224.027.1833378.01667
4AgraIndia1579248.0IndiaAgra - India236.027.1833378.01667
..............................
6247VijayanagarIndia1561480.0IndiaVijayanagar - India480.015.3350076.46200
6248VijayanagarIndia1562480.0IndiaVijayanagar - India480.015.3350076.46200
6249VijayanagarIndia1563480.0IndiaVijayanagar - India480.015.3350076.46200
6250VijayanagarIndia1564480.0IndiaVijayanagar - India480.015.3350076.46200
6251VijayanagarIndia1565480.0IndiaVijayanagar - India480.015.3350076.46200
\n

6252 rows \u00d7 9 columns

\n
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] }/usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description_file' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'root_url' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description-file' warnings.warn(msg) /usr/lib/python3.9/site-packages/setuptools/dist.py:472: SetuptoolsDeprecationWarning: Invalid dash-separated options !! ******************************************************************************** Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead. This deprecation is overdue, please update your project and remove deprecated calls to avoid build errors in the future. See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details. ******************************************************************************** !! opt = self.warn_dash_deprecation(opt, section) -------------------------------------------------------------------- running bdist_wheel running build running build_py creating build creating build/lib creating build/lib/tapyoca copying tapyoca/__init__.py -> build/lib/tapyoca creating build/lib/tapyoca/agglutination copying tapyoca/agglutination/__init__.py -> build/lib/tapyoca/agglutination copying tapyoca/agglutination/data_acquisition.py -> build/lib/tapyoca/agglutination copying tapyoca/agglutination/partitions.py -> build/lib/tapyoca/agglutination copying tapyoca/agglutination/py_names.py -> build/lib/tapyoca/agglutination creating build/lib/tapyoca/covid copying tapyoca/covid/__init__.py -> build/lib/tapyoca/covid copying tapyoca/covid/covid_bar_chart_race.py -> build/lib/tapyoca/covid creating build/lib/tapyoca/darpa copying tapyoca/darpa/__init__.py -> build/lib/tapyoca/darpa copying tapyoca/darpa/darpa.py -> build/lib/tapyoca/darpa creating build/lib/tapyoca/demonyms copying tapyoca/demonyms/__init__.py -> build/lib/tapyoca/demonyms copying tapyoca/demonyms/data_acquisition.py -> build/lib/tapyoca/demonyms creating build/lib/tapyoca/indexing_podcasts copying tapyoca/indexing_podcasts/__init__.py -> build/lib/tapyoca/indexing_podcasts copying tapyoca/indexing_podcasts/prep.py -> build/lib/tapyoca/indexing_podcasts creating build/lib/tapyoca/parquet_deformations copying tapyoca/parquet_deformations/__init__.py -> build/lib/tapyoca/parquet_deformations copying tapyoca/parquet_deformations/parquet_deformations.py -> build/lib/tapyoca/parquet_deformations copying tapyoca/parquet_deformations/py_fonts.py -> build/lib/tapyoca/parquet_deformations creating build/lib/tapyoca/phoneming copying tapyoca/phoneming/__init__.py -> build/lib/tapyoca/phoneming copying tapyoca/phoneming/explore.py -> build/lib/tapyoca/phoneming running egg_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' installing to build/bdist.linux-x86_64/wheel running install running install_lib creating build/bdist.linux-x86_64 creating build/bdist.linux-x86_64/wheel creating build/bdist.linux-x86_64/wheel/tapyoca copying build/lib/tapyoca/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca creating build/bdist.linux-x86_64/wheel/tapyoca/agglutination copying build/lib/tapyoca/agglutination/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/agglutination copying build/lib/tapyoca/agglutination/data_acquisition.py -> build/bdist.linux-x86_64/wheel/tapyoca/agglutination copying build/lib/tapyoca/agglutination/partitions.py -> build/bdist.linux-x86_64/wheel/tapyoca/agglutination copying build/lib/tapyoca/agglutination/py_names.py -> build/bdist.linux-x86_64/wheel/tapyoca/agglutination creating build/bdist.linux-x86_64/wheel/tapyoca/covid copying build/lib/tapyoca/covid/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/covid copying build/lib/tapyoca/covid/covid_bar_chart_race.py -> build/bdist.linux-x86_64/wheel/tapyoca/covid creating build/bdist.linux-x86_64/wheel/tapyoca/darpa copying build/lib/tapyoca/darpa/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/darpa copying build/lib/tapyoca/darpa/darpa.py -> build/bdist.linux-x86_64/wheel/tapyoca/darpa creating build/bdist.linux-x86_64/wheel/tapyoca/demonyms copying build/lib/tapyoca/demonyms/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/demonyms copying build/lib/tapyoca/demonyms/data_acquisition.py -> build/bdist.linux-x86_64/wheel/tapyoca/demonyms creating build/bdist.linux-x86_64/wheel/tapyoca/indexing_podcasts copying build/lib/tapyoca/indexing_podcasts/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/indexing_podcasts copying build/lib/tapyoca/indexing_podcasts/prep.py -> build/bdist.linux-x86_64/wheel/tapyoca/indexing_podcasts creating build/bdist.linux-x86_64/wheel/tapyoca/parquet_deformations copying build/lib/tapyoca/parquet_deformations/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/parquet_deformations copying build/lib/tapyoca/parquet_deformations/parquet_deformations.py -> build/bdist.linux-x86_64/wheel/tapyoca/parquet_deformations copying build/lib/tapyoca/parquet_deformations/py_fonts.py -> build/bdist.linux-x86_64/wheel/tapyoca/parquet_deformations creating build/bdist.linux-x86_64/wheel/tapyoca/phoneming copying build/lib/tapyoca/phoneming/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/phoneming copying build/lib/tapyoca/phoneming/explore.py -> build/bdist.linux-x86_64/wheel/tapyoca/phoneming running install_egg_info Copying tapyoca.egg-info to build/bdist.linux-x86_64/wheel/tapyoca-0.0.4-py3.9.egg-info running install_scripts adding license file "LICENSE" (matched pattern "LICEN[CS]E*") creating build/bdist.linux-x86_64/wheel/tapyoca-0.0.4.dist-info/WHEEL creating '/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir/pip-wheel-4icnl5up/.tmp-24ps9ox2/tapyoca-0.0.4-py3-none-any.whl' and adding 'build/bdist.linux-x86_64/wheel' to it adding 'tapyoca/__init__.py' adding 'tapyoca/agglutination/__init__.py' adding 'tapyoca/agglutination/data_acquisition.py' adding 'tapyoca/agglutination/partitions.py' adding 'tapyoca/agglutination/py_names.py' adding 'tapyoca/covid/__init__.py' adding 'tapyoca/covid/covid_bar_chart_race.py' adding 'tapyoca/darpa/__init__.py' adding 'tapyoca/darpa/darpa.py' adding 'tapyoca/demonyms/__init__.py' adding 'tapyoca/demonyms/data_acquisition.py' adding 'tapyoca/indexing_podcasts/__init__.py' adding 'tapyoca/indexing_podcasts/prep.py' adding 'tapyoca/parquet_deformations/__init__.py' adding 'tapyoca/parquet_deformations/parquet_deformations.py' adding 'tapyoca/parquet_deformations/py_fonts.py' adding 'tapyoca/phoneming/__init__.py' adding 'tapyoca/phoneming/explore.py' adding 'tapyoca-0.0.4.dist-info/LICENSE' adding 'tapyoca-0.0.4.dist-info/METADATA' adding 'tapyoca-0.0.4.dist-info/WHEEL' adding 'tapyoca-0.0.4.dist-info/top_level.txt' adding 'tapyoca-0.0.4.dist-info/RECORD' removing build/bdist.linux-x86_64/wheel Building wheel for tapyoca (pyproject.toml): finished with status 'done' Created wheel for tapyoca: filename=tapyoca-0.0.4-py3-none-any.whl size=77342 sha256=c086027af82c697ea086d4a6f970681da512490eeaad1eecc6a3adc077e66f59 Stored in directory: /builddir/.cache/pip/wheels/fb/90/9f/87646deded13046aa0a96bf3193d5d0c4e0c1bbf90674efcd9 Successfully built tapyoca + RPM_EC=0 ++ jobs -p + exit 0 Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.6xpntB + umask 022 + cd /builddir/build/BUILD + '[' /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64 '!=' / ']' + rm -rf /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64 ++ dirname /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64 + mkdir -p /builddir/build/BUILDROOT + mkdir /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64 + cd tapyoca-0.0.4 ++ ls /builddir/build/BUILD/tapyoca-0.0.4/pyproject-wheeldir/tapyoca-0.0.4-py3-none-any.whl ++ xargs basename --multiple ++ sed -E 's/([^-]+)-([^-]+)-.+\.whl/\1==\2/' + specifier=tapyoca==0.0.4 + '[' -z tapyoca==0.0.4 ']' + TMPDIR=/builddir/build/BUILD/tapyoca-0.0.4/.pyproject-builddir + /usr/bin/python3 -m pip install --root /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64 --prefix /usr --no-deps --disable-pip-version-check --progress-bar off --verbose --ignore-installed --no-warn-script-location --no-index --no-cache-dir --find-links /builddir/build/BUILD/tapyoca-0.0.4/pyproject-wheeldir tapyoca==0.0.4 Using pip 22.0.4 from /usr/lib/python3.9/site-packages/pip (python 3.9) Looking in links: /builddir/build/BUILD/tapyoca-0.0.4/pyproject-wheeldir Processing ./pyproject-wheeldir/tapyoca-0.0.4-py3-none-any.whl Installing collected packages: tapyoca Successfully installed tapyoca-0.0.4 + '[' -d /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/bin ']' + rm -f /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-ghost-distinfo + site_dirs=() + '[' -d /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages ']' + site_dirs+=("/usr/lib/python3.9/site-packages") + '[' /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib64/python3.9/site-packages '!=' /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages ']' + '[' -d /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib64/python3.9/site-packages ']' + for site_dir in ${site_dirs[@]} + for distinfo in /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64$site_dir/*.dist-info + echo '%ghost /usr/lib/python3.9/site-packages/tapyoca-0.0.4.dist-info' + sed -i s/pip/rpm/ /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages/tapyoca-0.0.4.dist-info/INSTALLER + PYTHONPATH=/usr/lib/rpm/redhat + /usr/bin/python3 -B /usr/lib/rpm/redhat/pyproject_preprocess_record.py --buildroot /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64 --record /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages/tapyoca-0.0.4.dist-info/RECORD --output /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-record + rm -fv /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages/tapyoca-0.0.4.dist-info/RECORD removed '/builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages/tapyoca-0.0.4.dist-info/RECORD' + rm -fv /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages/tapyoca-0.0.4.dist-info/REQUESTED removed '/builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages/tapyoca-0.0.4.dist-info/REQUESTED' ++ wc -l /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-ghost-distinfo ++ cut -f1 '-d ' + lines=1 + '[' 1 -ne 1 ']' + RPM_PERCENTAGES_COUNT=8 + /usr/bin/python3 /usr/lib/rpm/redhat/pyproject_save_files.py --output-files /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-files --output-modules /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-modules --buildroot /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64 --sitelib /usr/lib/python3.9/site-packages --sitearch /usr/lib64/python3.9/site-packages --python-version 3.9 --pyproject-record /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-record --prefix /usr '*' +auto + /usr/bin/find-debuginfo -j4 --strict-build-id -m -i --build-id-seed 0.0.4-1.el9 --unique-debug-suffix -0.0.4-1.el9.x86_64 --unique-debug-src-base python-tapyoca-0.0.4-1.el9.x86_64 --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 --remove-section .gnu.build.attributes -S debugsourcefiles.list /builddir/build/BUILD/tapyoca-0.0.4 find: 'debug': No such file or directory + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/redhat/brp-ldconfig + /usr/lib/rpm/brp-compress + /usr/lib/rpm/redhat/brp-strip-lto /usr/bin/strip + /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip + /usr/lib/rpm/redhat/brp-python-bytecompile '' 1 0 Bytecompiling .py files below /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9 using python3.9 + /usr/lib/rpm/brp-python-hardlink + /usr/lib/rpm/redhat/brp-mangle-shebangs Executing(%check): /bin/sh -e /var/tmp/rpm-tmp.e6h4TB + umask 022 + cd /builddir/build/BUILD + cd tapyoca-0.0.4 ++ cat /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-modules + '[' -z 'tapyoca tapyoca.agglutination tapyoca.agglutination.data_acquisition tapyoca.agglutination.partitions tapyoca.agglutination.py_names tapyoca.covid tapyoca.covid.covid_bar_chart_race tapyoca.darpa tapyoca.darpa.darpa tapyoca.demonyms tapyoca.demonyms.data_acquisition tapyoca.indexing_podcasts tapyoca.indexing_podcasts.prep tapyoca.parquet_deformations tapyoca.parquet_deformations.parquet_deformations tapyoca.parquet_deformations.py_fonts tapyoca.phoneming tapyoca.phoneming.explore' ']' + '[' '!' -f /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-modules ']' + PATH=/builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/sbin + PYTHONPATH=/builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib64/python3.9/site-packages:/builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages + _PYTHONSITE=/builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib64/python3.9/site-packages:/builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64/usr/lib/python3.9/site-packages + PYTHONDONTWRITEBYTECODE=1 + /usr/bin/python3 -s /usr/lib/rpm/redhat/import_all_modules.py -f /builddir/build/BUILD/python-tapyoca-0.0.4-1.el9.x86_64-pyproject-modules -t Check import: tapyoca + RPM_EC=0 ++ jobs -p + exit 0 Processing files: python3-tapyoca-0.0.4-1.el9.noarch Provides: python-tapyoca = 0.0.4-1.el9 python3-tapyoca = 0.0.4-1.el9 python3.9-tapyoca = 0.0.4-1.el9 python3.9dist(tapyoca) = 0.0.4 python3dist(tapyoca) = 0.0.4 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: python(abi) = 3.9 Obsoletes: python39-tapyoca < 0.0.4-1.el9 Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64 Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.el9.src.rpm Wrote: /builddir/build/RPMS/python3-tapyoca-0.0.4-1.el9.noarch.rpm Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.yuOfmg + umask 022 + cd /builddir/build/BUILD + cd tapyoca-0.0.4 + /usr/bin/rm -rf /builddir/build/BUILDROOT/python-tapyoca-0.0.4-1.el9.x86_64 + RPM_EC=0 ++ jobs -p + exit 0 Finish: rpmbuild python-tapyoca-0.0.4-1.el9.src.rpm Finish: build phase for python-tapyoca-0.0.4-1.el9.src.rpm INFO: chroot_scan: 3 files copied to /var/lib/copr-rpmbuild/results/chroot_scan INFO: /var/lib/mock/rhel+epel-9-x86_64-1740863293.500096/root/var/log/dnf.log /var/lib/mock/rhel+epel-9-x86_64-1740863293.500096/root/var/log/dnf.librepo.log /var/lib/mock/rhel+epel-9-x86_64-1740863293.500096/root/var/log/dnf.rpm.log INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz /bin/tar: Removing leading `/' from member names INFO: Done(/var/lib/copr-rpmbuild/results/python-tapyoca-0.0.4-1.el9.src.rpm) Config(child) 0 minutes 14 seconds INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results INFO: Cleaning up build root ('cleanup_on_success=True') Start: clean chroot INFO: unmounting tmpfs. Finish: clean chroot Finish: run Running RPMResults tool Package info: { "packages": [ { "name": "python-tapyoca", "epoch": null, "version": "0.0.4", "release": "1.el9", "arch": "src" }, { "name": "python3-tapyoca", "epoch": null, "version": "0.0.4", "release": "1.el9", "arch": "noarch" } ] } RPMResults finished