From 1ddaad1c1c37f3553a59f556bc757c4aea585bef Mon Sep 17 00:00:00 2001
From: Andrej Karpathy
Date: Sat, 31 Jan 2026 19:12:25 +0000
Subject: [PATCH] nuke midtraining from orbit, it's not as needed now that we
 have a BOS-aligned dataloader. Also change the README a lot. midtraining is
 not yet fully properly erased across the board, but good enough for step 1

---
 README.md                  | 165 ++++++-------
 dev/scaling_laws_jan26.png | Bin 0 -> 93061 bytes
 runs/runcpu.sh             |  10 +-
 runs/speedrun.sh           |  50 +---
 scripts/chat_cli.py        |   2 +-
 scripts/chat_eval.py       |   4 +-
 scripts/chat_sft.py        | 487 ++++++++++++++++++++++---------------
 scripts/mid_train.py       | 386 -----------------------------
 8 files changed, 389 insertions(+), 715 deletions(-)
 create mode 100644 dev/scaling_laws_jan26.png
 delete mode 100644 scripts/mid_train.py

diff --git a/README.md b/README.md
index 89d2ce2..800c5d9 100644
--- a/README.md
+++ b/README.md
@@ -1,35 +1,62 @@
# nanochat

![nanochat logo](dev/nanochat.png)
+![scaling laws](dev/scaling_laws_jan26.png)

-> The best ChatGPT that $100 can buy.
+nanochat is the simplest experimental harness for training LLMs. It is designed to run on a single GPU node, the code is minimal/hackable, and it covers all major LLM stages including tokenization, pretraining, finetuning, evaluation, inference, and a chat UI. For example, you can train your own GPT-2 capability LLM (which cost ~$50,000 to train in 2019) for only $73 (3 hours on an 8XH100 GPU node) and then talk to it in a familiar ChatGPT-like web UI.

-This repo is a full-stack implementation of an LLM like ChatGPT in a single, clean, minimal, hackable, dependency-lite codebase. nanochat is designed to run on a single 8XH100 node via scripts like [speedrun.sh](runs/speedrun.sh), that run the entire pipeline start to end. This includes tokenization, pretraining, finetuning, evaluation, inference, and web serving over a simple UI so that you can talk to your own LLM just like ChatGPT.
nanochat will become the capstone project of the course LLM101n being developed by Eureka Labs.
+For questions about the repo, I recommend using [DeepWiki](https://deepwiki.com/karpathy/nanochat) from Devin/Cognition, posting in the [Discussions tab](https://github.com/karpathy/nanochat/discussions), or coming by the [#nanochat](https://discord.com/channels/1020383067459821711/1427295580895314031) channel on Discord.

## Updates

-- (Jan 16 2026) The repo is in active development, I am currently fleshing out the pretraining stage.
-- (Jan 7 2026) See new post: [nanochat Miniseries v1](https://github.com/karpathy/nanochat/discussions/420) and the associated script [miniseries.sh](runs/miniseries.sh).
+- (Jan 31 2026) Major revamp of all scripts/README ongoing, deleting the midtraining stage; things might be a bit messy briefly...
+- (Jan 30 2026) With all the latest improvements we're able to train a GPT-2 grade LLM for about $73. The [runs/speedrun.sh](runs/speedrun.sh) script will become the reference way to train a GPT-2 grade model and talk to it.

-## Talk to it
+## Leaderboard

-To get a sense of the endpoint of this repo, you can currently find [nanochat d34](https://github.com/karpathy/nanochat/discussions/314) hosted on [nanochat.karpathy.ai](https://nanochat.karpathy.ai/). This model is now a few months old but it still gives a rough idea of the intelligence you can achieve for approximately $1000. While this model easily outperforms GPT-2 of 2019, it falls dramatically short of modern Large Language Models like GPT-5. When talking to these micro models, you'll see that they make a lot of mistakes, they are a little bit naive and silly and they hallucinate a ton, a bit like children. But what makes nanochat unique is that it is fully yours - fully configurable, tweakable, hackable, and trained by you from start to end. To train and talk to your own, we turn to...
+
+| # | Record time | Description | Date | Commit | Contributors |
+|---|-------------|-------------|------|--------|--------------|
+| 1 | 3.04 hours | d24 baseline, slightly overtrained | Jan 29 2026 | 348fbb3 | @karpathy |

-## Quick start
+The primary metric we care about is "time to GPT-2" - the wall clock time needed to outperform the GPT-2 (1.6B) CORE metric on an 8XH100 GPU node. In 2019, the training of GPT-2 cost approximately $50,000, so it is incredible that, thanks to many advances across the stack over the intervening 7 years, we can now do so in 3 hours or less, for about $73. Once your repo is set up (see the [runs/speedrun.sh](runs/speedrun.sh) script for reference), you can kick off a run; for example, this is how I launched the Jan 29 run:

-The fastest way to feel the magic is to run the speedrun script [speedrun.sh](runs/speedrun.sh), which trains and inferences the $100 tier of nanochat. On an 8XH100 node at $24/hr, this gives a total run time of about 4 hours. Boot up a new 8XH100 GPU box from your favorite provider (e.g. I use and like [Lambda](https://lambda.ai/service/gpu-cloud)), and kick off the training script:
+```
+OMP_NUM_THREADS=1 torchrun --standalone --nproc_per_node=8 -m scripts.base_train -- \
+    --depth=24 \
+    --run=d24-jan29 \
+    --model-tag=d24_jan29 \
+    --device-batch-size=16 \
+    --sample-every=-1 \
+    --save-every=-1 \
+    --core-metric-max-per-task=-1 \
+    --core-metric-every=3000 \
+    --target-param-data-ratio=12
+```
+
+After 3 hours we get output like this:
+
+```
+...
+wandb: Run summary:
+wandb: core_metric 0.25851
+wandb: step 16704
+wandb: total_training_flops 4.330784131228946e+19
+wandb: total_training_time 10949.46713
+```
+
+The GPT-2 CORE score (i.e. the target to beat) is 0.256525, and this d24 run scores higher (0.25851). Then we look at `total_training_time`, which is the time (in seconds) of the training iterations alone, excluding all evaluation and logging. We get `10949/60/60 ~= 3.04` hours, the current record.
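The record arithmetic above can be checked mechanically; a quick sketch using the numbers from the wandb summary:

```python
# Check the record claim using numbers from the wandb summary above.
gpt2_core = 0.256525               # GPT-2 (1.6B) CORE score, the target to beat
run_core = 0.25851                 # this run's final CORE metric
total_training_time = 10949.46713  # seconds, training iterations only

assert run_core > gpt2_core        # the run outperforms GPT-2 on CORE
hours = total_training_time / 60 / 60
print(f"{hours:.2f} hours")        # prints "3.04 hours"
```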
+
+## Getting started
+
+### Reproduce and talk to GPT-2
+
+The most fun you can have is to train your own GPT-2 and talk to it. The entire pipeline to do so is contained in the single file [runs/speedrun.sh](runs/speedrun.sh), which is designed to be run on an 8XH100 GPU node. Currently, at ~$24/hour for these nodes, pretraining a GPT-2 grade model takes approximately 3 hours and will set you back about $75. Boot up a new 8XH100 GPU box from your favorite provider (e.g. I use and like [Lambda](https://lambda.ai/service/gpu-cloud)), and kick off the training script:

```bash
bash runs/speedrun.sh
```

-Alternatively, since the script runs for 4 hours, I like to launch it like this inside a new screen session `speedrun` (and also log output to `speedrun.log`):
-
-```bash
-screen -L -Logfile speedrun.log -S speedrun bash runs/speedrun.sh
-```
-
-See the [screen cheatsheet](https://gist.github.com/jctosta/af918e1618682638aa82) if you are less familiar. You can watch it go inside the screen session, or detach with `Ctrl-a d` and `tail speedrun.log` to view progress. Now wait 4 hours. Once it's done, you can talk to your LLM via the ChatGPT-like web UI. Make sure again that your local uv virtual environment is active (run `source .venv/bin/activate`), and serve it:
+You may wish to do so in a screen session (e.g. `screen -L -Logfile speedrun.log -S speedrun bash runs/speedrun.sh`), as this will take ~3 hours to run. Once it's done, you can talk to it via the ChatGPT-like web UI. Make sure again that your local uv virtual environment is active (run `source .venv/bin/activate`), and serve it:

```bash
python -m scripts.chat_web
@@ -43,84 +70,43 @@ And then visit the URL shown. Make sure to access it correctly, e.g. on Lambda u

---

-You can also `cat report.md` file which appeared in the project directory and contains the "report card" of the run, i.e. a bunch of evaluations and metrics.
At the very end, you'll see a summary table, for example: - ---- - -- Characters: 333,989 -- Lines: 8,304 -- Files: 44 -- Tokens (approx): 83,497 -- Dependencies (uv.lock lines): 2,004 - -| Metric | BASE | MID | SFT | RL | -|-----------------|----------|----------|----------|----------| -| CORE | 0.2219 | - | - | - | -| ARC-Challenge | - | 0.2875 | 0.2807 | - | -| ARC-Easy | - | 0.3561 | 0.3876 | - | -| GSM8K | - | 0.0250 | 0.0455 | 0.0758 | -| HumanEval | - | 0.0671 | 0.0854 | - | -| MMLU | - | 0.3111 | 0.3151 | - | -| ChatCORE | - | 0.0730 | 0.0884 | - | - -Total wall clock time: 3h51m - ---- - -(Your table might be missing the RL number by default). For a lot more information around the speedrun script and what to look for and expect, please refer to the walkthrough that I posted in Discussions of the repo: ["Introducing nanochat: The best ChatGPT that $100 can buy"](https://github.com/karpathy/nanochat/discussions/1). - -## Bigger models - -Unsurprisingly, $100 is not enough to train a highly performant ChatGPT clone. In fact, LLMs are famous for their multi-million dollar capex. For our purposes, I think there are two more scales of interest. First is the ~$300 tier d26 model (i.e. depth=26) that trains in ~12 hours, which slightly outperforms GPT-2 CORE score. Second is the $1000 tier (~41.6 hours), just because it's a nice round number. But both of these are not yet fully supported and therefore not attached here in the master branch yet. - -That said, to give a sense, the example changes needed for the [speedrun.sh](runs/speedrun.sh) file to train a GPT-2 grade model d26 only involve three changes: - -```bash -... -# you'll need to download more data shards for pretraining -# get the number of parameters, multiply 20 to get tokens, multiply by 4.8 to get chars, -# divide by 250 million to get number of shards. todo need to improve this... -python -m nanochat.dataset -n 450 & -... -# use --depth to increase model size. 
to not oom, halve device batch size 32 -> 16: -torchrun --standalone --nproc_per_node=8 -m scripts.base_train -- --depth=26 --device-batch-size=16 -... -# make sure to use the same later during midtraining: -torchrun --standalone --nproc_per_node=8 -m scripts.mid_train -- --device-batch-size=16 -``` - -That's it! The biggest thing to pay attention to is making sure you have enough data shards to train on (the code will loop and do more epochs over the same training set otherwise, decreasing learning speed a bit), and managing your memory/VRAM, primarily by decreasing the `device_batch_size` until things fit (the scripts automatically compensate by increasing the number of gradient accumulation loops, simply turning parallel compute to sequential compute). - -And a bit more about computing environments that will run nanochat: +A few more notes: - The code will run just fine on the Ampere 8XA100 GPU node as well, but a bit slower. - All code will run just fine on even a single GPU by omitting `torchrun`, and will produce ~identical results (code will automatically switch to gradient accumulation), but you'll have to wait 8 times longer. - If your GPU(s) have less than 80GB, you'll have to tune some of the hyperparameters or you will OOM / run out of VRAM. Look for `--device_batch_size` in the scripts and reduce it until things fit. E.g. from 32 (default) to 16, 8, 4, 2, or even 1. Less than that you'll have to know a bit more what you're doing and get more creative. -- Most of the code is fairly vanilla PyTorch so it should run on anything that supports that - xpu, mps, or etc, but I haven't implemented this out of the box so it might take a bit of tinkering. +- Most of the code is fairly vanilla PyTorch so it should run on anything that supports that - xpu, mps, or etc, but I haven't personally exercised all of these code paths so there might be sharp edges. 
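To make the `--device_batch_size` note above concrete, here is a minimal sketch of the compensation logic; the token budget and sequence length are illustrative assumptions, not nanochat's exact values:

```python
# Illustrative sketch (assumed numbers): when --device_batch_size shrinks, the
# number of gradient accumulation steps grows so that each optimizer step still
# processes the same total number of tokens (parallel compute -> sequential).
TOTAL_BATCH_SIZE = 524288  # tokens per optimizer step (assumption)
SEQ_LEN = 2048             # sequence length (assumption)
NUM_GPUS = 8               # 8XH100 node

def grad_accum_steps(device_batch_size: int) -> int:
    tokens_per_microstep = device_batch_size * SEQ_LEN * NUM_GPUS
    assert TOTAL_BATCH_SIZE % tokens_per_microstep == 0
    return TOTAL_BATCH_SIZE // tokens_per_microstep

print(grad_accum_steps(32))  # 1 accumulation step: fully parallel
print(grad_accum_steps(16))  # 2 steps: half the VRAM, twice the loops
print(grad_accum_steps(4))   # 8 steps
```

The end result is identical gradients either way; smaller device batches just trade wall clock time for memory.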
+
+## Research
+
+If you are a researcher and wish to help improve nanochat, two scripts of interest are [runs/scaling_laws.sh](runs/scaling_laws.sh) and [runs/miniseries.sh](runs/miniseries.sh). See [Jan 7 miniseries v1](https://github.com/karpathy/nanochat/discussions/420) for related documentation. For quick experimentation (~5 min pretraining runs) my favorite scale is to train a 12-layer model (GPT-1 sized), e.g. like this:
+
+```
+OMP_NUM_THREADS=1 torchrun --standalone --nproc_per_node=8 -m scripts.base_train -- \
+    --depth=12 \
+    --run="d12" \
+    --model-tag="d12" \
+    --core-metric-every=999999 \
+    --sample-every=-1 \
+    --save-every=-1
+```
+
+This uses wandb (run name "d12"), only runs the CORE metric on the last step, and doesn't sample or save intermediate checkpoints. My iteration loop is to change something in the code, re-run a d12 (or a d16, etc.), and see if it helped.
+
+The overall approach is to treat the depth of the model as the single dial of complexity. By sweeping out the depth, we get increasingly more powerful models. We determine the scaling laws, set the data budget to a compute-optimal setting, train a whole miniseries of models of increasing sizes, and compare them to the GPT-2 and GPT-3 miniseries. Right now, beating GPT-2 specifically faster and faster is the most interesting target.

## Running on CPU / MPS

-nanochat can be run on CPU or on MPS (if you're on Macbook) in principle, and will automatically try to detect what device is best to run on. The script [runcpu.sh](runs/runcpu.sh) shows a very simple example that will exercise the code paths but basically produce garbage results. Unless you know what you're doing, I basically don't recommend using this script right now and hope to tune it a bit more in the future.
+The script [runs/runcpu.sh](runs/runcpu.sh) shows a very simple example of running on CPU or Apple Silicon.
It dramatically shrinks the LLM that is being trained to make things fit into a reasonable time interval of a few tens of minutes of training. You will not get strong results this way.

-## Customization
+## Guides

-To customize your nanochat, see [Guide: infusing identity to your nanochat](https://github.com/karpathy/nanochat/discussions/139) in Discussions, which describes how you can tune your nanochat's personality through synthetic data generation and mixing that data into midtraining and SFT stages.
+I've published a number of guides that might contain helpful information:

-Additionally, to add new abilities to nanochat, see [Guide: counting r in strawberry (and how to add abilities generally)](https://github.com/karpathy/nanochat/discussions/164).
-
-## Questions
-
-I recommend using [DeepWiki](https://deepwiki.com/karpathy/nanochat) from Devin/Cognition to ask questions of this repo. In the URL of this repo, simply change github.com to deepwiki.com, and you're off.
-
-You can also come to the [#nanochat Discord channel](https://discord.com/channels/1020383067459821711/1427295580895314031) to ask questions, or use the Discussions.
-
-## Tests
-
-I haven't invested too much here but some tests exist, especially for the tokenizer. Run e.g. as:
-
-```bash
-python -m pytest tests/test_engine.py -v -s
-```
+
+- [Oct 13 2025 original nanochat post](https://github.com/karpathy/nanochat/discussions/1) introducing nanochat, though it now contains some deprecated information and the model it describes is a lot older (with worse results) than current master.
+- [Jan 7 miniseries v1](https://github.com/karpathy/nanochat/discussions/420) documents the first nanochat miniseries of models.
+- To customize your nanochat, see [Guide: infusing identity to your nanochat](https://github.com/karpathy/nanochat/discussions/139) in Discussions, which describes how you can tune your nanochat's personality through synthetic data generation and mixing that data into the SFT stage.
+- To add new abilities to nanochat, see [Guide: counting r in strawberry (and how to add abilities generally)](https://github.com/karpathy/nanochat/discussions/164).

## File structure

@@ -159,12 +145,11 @@ python -m pytest tests/test_engine.py -v -s
 │   ├── base_eval.py # Base model: calculate CORE score
 │   ├── base_loss.py # Base model: calculate bits per byte, sample
 │   ├── base_train.py # Base model: train
-│   ├── chat_cli.py # Chat model (SFT/Mid): talk to over CLI
-│   ├── chat_eval.py # Chat model (SFT/Mid): eval tasks
-│   ├── chat_rl.py # Chat model (SFT/Mid): reinforcement learning
+│   ├── chat_cli.py # Chat model: talk to over CLI
+│   ├── chat_eval.py # Chat model: eval tasks
+│   ├── chat_rl.py # Chat model: reinforcement learning
 │   ├── chat_sft.py # Chat model: train SFT
-│   ├── chat_web.py # Chat model (SFT/Mid): talk to over WebUI
-│   ├── mid_train.py # Chat model: midtraining
+│   ├── chat_web.py # Chat model: talk to over WebUI
 │   ├── tok_eval.py # Tokenizer: evaluate compression rate
 │   └── tok_train.py # Tokenizer: train it
 ├── tasks
@@ -183,9 +168,9 @@ python -m pytest tests/test_engine.py -v -s

## Contributing

-nanochat is nowhere near finished. The goal is to improve the state of the art in micro models that are accessible to work with end to end on budgets of < $1000 dollars. Accessibility is about overall cost but also about cognitive complexity - nanochat is not an exhaustively configurable LLM "framework"; there will be no giant configuration objects, model factories, or if-then-else monsters in the code base. It is a single, cohesive, minimal, readable, hackable, maximally-forkable "strong baseline" codebase designed to run start to end and produce a concrete ChatGPT clone and its report card.
+The goal of nanochat is to improve the state of the art in micro models that are accessible to work with end to end on budgets of < $1000.
Accessibility is about overall cost but also about cognitive complexity - nanochat is not an exhaustively configurable LLM "framework"; there are no giant configuration objects, model factories, or if-then-else monsters in the code base. It is a single, cohesive, minimal, readable, hackable, maximally-forkable "strong baseline" codebase designed to run start to end and produce a ChatGPT model you can talk to. The part I personally find most interesting right now is reducing the time to GPT-2 (i.e. getting a CORE score above 0.256525). This currently takes ~3 hours, but by improving the pretraining stage we can push it down further.

-Current LLM policy: disclosure. When submitting a PR, please declare any parts that had substantial LLM contribution and that you have not written or that you do not fully understand.
+Current AI policy: disclosure. When submitting a PR, please declare any parts that had substantial LLM contribution and that you have not written or that you do not fully understand.
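For a rough sense of the dollar cost of the current record, using the ~$24/hour node price quoted earlier in this README:

```python
# Rough cost of the current "time to GPT-2" record, per this README's numbers.
node_cost_per_hour = 24.0  # ~$24/hr for an 8XH100 node
record_hours = 3.04        # current record wall clock time
cost = node_cost_per_hour * record_hours
print(f"~${cost:.0f}")     # prints "~$73"
```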
## Acknowledgements

diff --git a/dev/scaling_laws_jan26.png b/dev/scaling_laws_jan26.png
new file mode 100644
index 0000000000000000000000000000000000000000..e8d1f727d6001e52cec56884626c84e6633c38a7
GIT binary patch
literal 93061
[binary image data for dev/scaling_laws_jan26.png omitted]
z@U9CA!U`WO{;xgPjhi>$|NLoJyv7Ua0v8dbnJ|Omt5Y*89RW1xJ5!M> z8}u&zFmC#nmx!;vFW}x&@BqR7>MNYG;Mbcx@FrA`IG$|vI`cfTyW7w-wK~@%rQc|& znalGq2;Iy>l8&=%`0{~@9?7QDf2vB=(DM!JA(TkQ2M#>Ge!oRFUyBD(DGWC5Eqk^8 zqs}R(Jgt&fDlEP!VDsy4&DNDdTbm>oMGzhgKJ5G+9#mRh!Q)VOze!XiIwa_Vo`pMJ z&4!8R)y5CLm7|Z~X9XDCSYo2(jP9oo-pN~&g=DA?v61@mpIP?K+auRiqFAVBe^TR6 zVj%VGzjoIWmxYXciAhGaD6m58y&(@Cl<@Y_(sHh|3PIAzb)o_9l5prBnw*o{{tmfP z`%J6J`ZL{$U_w?A^PBQ|b2qVW;(s>Gt4WuF=Q(_BW&@8-rgpPNs>JqlW-3&|b29i6 zIGHLsi`_kx)7;q&&}(Dyj(hh|Lkk z(;d?<33D0Mc?wDI+TIPtgVIF$m!j)ESR+_;y^bZ(X8(^Q3PcY5s&!nooyzO)K_3vJ z*JaAHeI~NrAEgVv&7e1t1(8?N2yc`Lexc8Nc`~IlGfRka&+%Sj#39L`?-bnDZgq9< z;AJTNP{%(4rn3rBFaKMXb|hi%F6ze8c;M)}H3|!rj`i<+sK4d(^PJd%X~51gP{u10>ihOO)qkhI zu(`=I4{-vw$SNgyInzg`=PDr{J(c;`I%lnyLp%Kfe&XVOu>Fzsrw7+EeG`*|!=>fF zdL8JmW~SFSrdr^OnG0PXMc=HlP=5X=+fv!~pKANgAU1XZBVUI(vYa~5Vc^>#*#A?C zu9>Yxjc?}yeNTbKxi}ghdEJm%J@aGs+TPT_fO}(b8>-M?HAf^~c^#C1-PlG_w_j7n zr_PQkt#?dAlAF!Q!azvExP4O zmj>~vU2A)LSzWxq~{jluT6l#OEHKm4E5=)@Su0?*%Zr`A&C+{3Tmzspz~2b5-_%=*>LUAY4ZREs~e%~hohA~?9e zd*q#dvJNzIX<$pN`6WDR@(hHl$T|AFHw{47;NMAHZ*E!HhG~^7xY&@nDGVh=0g*`w zkR91s#EZdUZ$BW!D}FH~Jk6vXNk%RS<7{blkOqi(Jgv#-tH*bXi6B4BgMM~8J!Dg( z*c}6fJJy(N%#<%7#@5N&5V(Zw0z9E~6=6i)d~ZRqrxy~qsIaGbv?=aiQ6kc~f!)+P zjG2cgRJ8f~<7Ke`&(Mz|&&l_+U$hB-@lU|S4`yUC+Q4*Kvx8zA&^JfLVHTulFmPD_ zBPO!XfYIV3VCEvPOufVfOISeb&HcU(UE5FwV%a8BQfk7>3!@|$b*8ZMMO64e5d9zZ@4ZT-gLpGOKgck#Gwqs5k^TzLzfYh zmw}}_5)mA|rindrW*@Bs7tW&Rhl$pFl3iyo5lbGZu_HH5U}XfTNI3+740ZviJhPuw z2sDSfKxvHdT!E+{1FEC%k5aDyuNR?Q%!AgvYJC=lJErfANcB`g=|qJ-w71%mHsKB~ zn=~6v-M{BOCVb=QX!}uioRl%F(ZcmwxC}2m<3!2S)d8bL$}=}G+A5|hA0k@YJ{ZYe zgEJd$v3m8&l@MK4x7T!+eKc-xvh)LeC@GD3;l! 
zo`=7{ehc1fb5gXJCAm_9Hw?mk7pMVI#7BD?alFmtW5qd6T^uAN{tc`yv1qs2>Ga0+ zNO=}D6&^D;(RKgHy#Fx;dE2x)Z`)(K#wFVSUXF3%Xhl}#(-c!u>I(FO3q7UtF_(X9 zL{t**)xly6QS!q?>bfD!h7mn5*a5h~;b$#@;N8HOfMAFKFMSnOi3pw*WaS8ZtC)tf z1`)@A9E|}iZW0pD!X8TJmT+51*ipwTS2A#SmiJ?l#JgH?Q+R*sIt{vm!R2v(z75_f zFn4|Z{GmfA@@tY6G&H^3FPYanwnSx_&O|Lmxp$qUzCcq%l-iqkm&qv4>4#Hjj(*i= zO}1t<4wvW%a<>2&#kL3FS+=EfHW`wlfk%XjAQ~*{@}-{w{4=#OR-0v>Mv~uExzP5o zGG)v+y7+l9o)2hQF#IxuW9FF~P;a1ZdCkAAmOoQO?TReP7#OC##&MJ3I&uS$R*98CJ=~Mt=#)k})B~N4R=l%3%F!a=xW=-Q7|SBhqVprgI>(@;022F=`mph%-B5uyjRjuH>|b}g0>{otwdiI1S4iZ!sozR$p@$_x5-&;RpI9WNiv&7^v6_JYXCQlHE0H zEox$7Uua8U4jxJV9kk&9#+n0X*%{a%5#H^vn>)NRP|jZ2vu(#Tydomc#cX|*R1xxf zn!ET{WaD{&V8qdK-#`Uf1{`>t;L%h>9Wc`NHfyo#Kg$fz+(T4%^4< zd>1u+!Gsh8J0IRR&6+w_gvjyU0$CTb?Wp1aUQrwTCs;mQLtHn|oTh^X7?H=p5_bor z5IjJ%_6B*qwS;~a%z58H=ZGMek!6`ln3xQ>9V`d3hm(?X;p0<*x5>rZt9;edc*bo% z68F!Db3}5%WrJycO&~X?ah0D8D9>Bbb1#a=gy$X$4|>Ha_8w8*3tc^KFHNd2<>12! z_I1s@I9Poa`q<$r1OHAbOeo@hU5Kp|enZu^!&7nJ=FO-%zRAL(L(%k5dd@Z|=!1#t z4}10#*Z7AZUZ{d@2y@qSc1^XBj&{sYj%zfqt#!2L;M-~L|3r4Z)@W_UuBSQ#iNhRUbL2PG; zTMc;!)bqmGF3-lp5UPPm(!bY>@gcyh_RoHl=uP?TSc*wejjQ#Z>%w*@nS^%hP4|LrMVHmz3VE7||<`NT2!^Z~r_UmU!K*KiSiC@$A zI@xOt-)+xIJYv|}+nyS4wCgROBr5)OUBlbFp?@|o=tKL#Y+!(CY=P{Zp#obP6mttE zPwW!{?K`(MM$!DQ83~LUEWp$KsOAWb_135RpDi@b##acIhNC{bXM{iWqm+OCdcWL+ zROZv*+He8tYQs3(K?^Y5N9>NU%R|P5uyF4iH-M@|2L@j@H270sjhqJ+3-VWhc#4D? zfSRCSI~#hB?A$N(87ypGdw1+S3qxNk+9z_(HO>uEycDHBPbHpJCv*^&hTg*dHl6T? zsIC4XGk%<_VMNoel=$fNw`srKwm*zUVJdi*1pgZ;cdv_>xjnl-BR_)c14K(L0y?es z5&<__IzZ*1I{KpP17RgdOaa(aksTD`&4t>a9&GH^P6)L3uWvB!mT6ZD5mObIrW?B zo2fVrR6oREH?1XJFm$Y0XMnD5v`0hGfv4ie+Bu)*+l=k|;X;owKgr}B_798CMZx2! z?It|T)cIpwSn5($aN3C>e3k*5Yre=&k9jU>`aoZ$Tn6URb7nC|KLm-jw}d^(OSXbg z^f>h6etdmP`!1Y;&|DEa)K)oe2S??V1z>Y#v@5Ooh? 
zy~+ACwPycse=EsVY*hfEN1WFct(IZM^8X^mV^hXQber~+K1`e+Dz3j2p1$K})MHq$ zmKdoqr12f2pkjc&mH%8qQTaZ13CdZb=%#HgJh~2A@#A7^F-f_MGlXPU8Hw!8K7)o$R;%N zWOZJH<`-n7mWr>)FO@rDPx$Zhgo4PI%VGLHgTM|dPz{Vohj(~6A~XXrPFGAO8+F;7lE0l*yq{@L7 z8OZ7WC!?+@J=j+xjP`Iy?T<-pG!4;x2rqXR5=@#Fi&Yo{`ItIg58PO;umRH8GsEM24+p)kI~ z9G5T}3FR&=xA57#(JBU#1>odaCP$I9T81yC9W6Cp&h`|Wpj`7K{xZ#rKk`>zK>=pt zAETl`HzFlMzM%X45*^U;4ULT#?gT_cv!CImO+Top^ksiiEB;AOw5uYame@0$ZSD67 zw#H4q3B&K7Tz@h7^){S^4Qg;P^@~=B1TSBxi{{`LGTq#2{ZpKrGqQ>r;JK7s&b9}B z?cN_+oRU&fAiPDmP}E}XTl2978M-wPtZ)W%R7-aU02_=Rc)=;m3=mc2Pkhrba729K z<6gK_!ckyW{73S}qLy|IxJgGR=Bff*z#q!@8YeyKB~)ieQxsUsv~D#N7RX8FIIZd3 zxOrMe9vSExj0&nF2gCClpF{_5%NmD|)fwj-^29?W^jKYL<5LyI;jVWkiyjs%{5tES zCF|Z}U}63G0n|}RB_$lF_>PWDbM5Jd*V{ji8nzDsCSw^kf%WXh3*RS)%r}ScA5}AvfSMu3qQ;*0I+jy&R%XlxX z4vv3eWiR+L!1aiz>%r*)(oV4u`&iO{Ku^e@0me_RU-4>>1w4*loHi(7bd{<4E|%mb zn|KEk1I=_NT|r<8{gt7w0_s0kHq1#FW7q4Xss%SYkr{?tZCf<(3*x|KMeRY^KJ1f2 z^mfu(+cFYUiSf+8I&4@MUu;1ZPtodsbW6(cMWC)~%>3`js&pH$F(ASmXxH+=k*W}@ z;$gCeIH|9F1}#&Gm-z5^r`fE;GJ?*vl*M+nTN14a-M*dAsqn=cGF6nAPkW)TA=xHn;PCOe#Gw&|U2(qS z5qlR zXdMaYlU}f2-cw6FJn{otk&v z7johDyR?~*l~l(9P44fvn4DAV;l(8_>pQqxt*=yZl?G0rzFhfJS5P@X& z=eutTZb8#*P`1+__Qf})*k<8zDKnf^v%Pcn9r_b^So#iEp%eq^Lj#EG$heKK0U!zy zgFy!}%7`!VzYC%8V0~3Aj7(rqAcUe=1l$}2OAz=3)D1+M9djQ!x?oOY0;QagceRcY zPdw>F0U1f!%+UU(<;2PeJ&2M}{$m=_GrQWRS>yDR1RxRd+*@og_#AH!d1G+!Rn6aB zXT%?rXSR7ri6nF525c@rMqK&(qrN8<-psa|k*&Tp1tfu8-bW%HEdSg}1|2DO%Fp9E zLF>>n=Bd zfSRkj@lVE=XKM_s(XAJ$`Gm7X%a5}K!B!|pm2o#3rlnpID-VM7mG@C2WT6s(XBbY= zzN4cEle@`P9qsryzbjq~()oBss^3(+Crsap=CWGu%Gv7?QL{wO>X6hJG(E%(Bhy5? 
z3(6#KmmU27@rAVb%*^IUX%xReFR~gh%mTV%Y==Gx)rf|aZRcn`$G9&5OIIAAxzJzbe`&Lk6?(~gEUr=byPVitHy0zEq zIt-unuyW2cAn0n%U=S_u3Fx8LZw_^fdOEw-zWRJ9d zi*L_2b%@wZ_&%6*EBzMk@AgS-l*nPz!@m`;N$9*Oi=F5Xyt0B6jFi=a|mS>7i;{>KZ+8MgZq=!fKMK5`}5?Z2xM0s#$D?E zijNNxxcaDeFmf=(=2i2l=hF8PPak*2RqwIiO^rT;UfgJtC^Rc2xl-WXnpxRUcvtLX z`6r(`dc}XlCLtmM^mhXRlKS?J{rJ49jr^QgB@z$tqlPy()!Y=359dZ88>=hKf;O4uqt*DFQ2xER=Mk8yhW2(CrM3sN+8Z4?79vuC*m zG-x=FB&B}6_e&{n=F3)#p&d^ciIKSQrmC?%Yo^V&)-4O#Z00mSe=Fho{p7y$Pe&!n zI}sxAZrb7WnVm;R8zm%%wMr%V!JSP{iTjvPXNOlBAKJov7tkayIp>o{{_I9slkuD7 z{+|qd$=)L)&M%#uto0%+n7`ZOTaACi;w*A&DL%U(srlvIJY7L8>!6M`agmIY2(+gM zVWGwYTXym)3e~Im0RbkBdZF`GDv|MfGZqa7@+hBOkMDR^y5Z#L+vXY}#EYE|5|_T= zbUC{gkFzm|81Fy&n4XSoyvBZWf!|%bxI@^kDTwo_K{@B`VL4dwm`=_+R{q+S&1Ga7 z|Fwaw`uD3;m{C9UN8Z4g8UQJ58fa%{5HtV|S_q%$z44EP z!F0MnUtTlhh=~%FD%>wZ)W24*zQ|+qL9zd z9Tf5Fm89FV_b1|p(ec4}EemUwJHs<40$!<-AgADR&GY-m?eiknjuW)LPDkJUsoP4c zTd0sVGsZSc0ND${?1#+CN=fisVjJC>4n*wH=#kYJEF)pwB!9I_(y+V@!3Y`&9zF` zlbF%6dm`M`ml}`n>iSAv=m)!%F{qd~pOrlM_AGRHl&R;}p-J(tC2Ve2*HufK@X9+d zI8#jz);5TGyVW>#v+25CK+Qf49<}XVCAW0` zl$vv#d4+od*elcPb*udzyBDfb%<36jc3$I?0_@QsXb#;JX`h;U?xE~h6;^E68j{se)M~J9i z{@Bgq5qnQ}Zig{Gps>yz+u^!1mT{>^27XUwPOf1bYsYxphvP&*Sq=`@ww z`*d#OWtY-d==VS@Lkj(#A~0x3fl3E8Wg}>h!8*JRN^D5ya7U1Jh>r`59bnirP-#wP zDvL{sJ9tEq{B-;J;_U1Yp?xA|g^!8$eumD7s>6r(`2|lL4kU>+&go(_u=+Ht+uOsT%&7om@je%XlPdO3S8%wRwL;j2I@x9ed(UK1Cl=1lO(`J|F5e;Fs6Dt$zH-J6RElV0jwPJ1&0#TM$1)7Yw^SFm%HjyVcsb{am&U_XO_SVvti ztl+Mm9_SE}7rm&Ts&k)FhpWH#tjdZixvv#d-|Hr|*B|7*1WiZ%%U1-4Y5xaLWl&Hy zlzVZ?Ab8}0t&5SDj^??{q60r~)%paT#j?n>Y;J?DzrUq=h3IC@*l(lAfUEGjvD+0O zI9MTR(5n{El3Qo8+B&hQ!eCt2am z)x>_+li-sziS*ENenyf2w-PKX0E~wZsw|+vfE5U6XCw%n2FPa)0y0Sj8#1~N9Ppn> zF?4g`Xcx7)zx&OyNb{i&;MA$B#wUB#bDSp~k)pGEmiK(9v$VRmrIrsFeOlL?wnM!M zA+5L-Zs|=^5(lF0*dOxUSWfY$Esi=0&ru$;YWT|kENbd(Jn?T?$+_IQAYseXri`uU ziC5xt%SAOftB{k`U6MB+@|k@U_x5b-03ivmpCg2;Fa|6-$) zP*7>NJj?ewb%wwblv=P_$iW~*2(A+n90Kl!{CDXfH+e@$WOU)V`iRhwN_Zx}c|$=G zG`P|mH)KQgC~UDlYv)%)^boYl#YB?B{t4*SBH%EGNwg${UxM8caVUJDS0+zLg1JO} 
zmf%-s!`9J}>V07*J(krc?H?9s_LEbo0C^Xr3!r2}oEy0Hj0Fe7Gai%}06ufxbOsG{tqnatAep)Wp%Tw! zbOpqohM?L)f+IPrkKO{(k)O#pih?q*I&QffUz4K~hG;XU#a%kvvvw762j{~6Oads> z?pXo`Ye~N&ImOHqG-i(`BJWk0TV;^|)-EB&y3w?2j#7Z}Ug7NXE;M=va7MHtT0Z1* zL(H93f-LN>B&B#@UX~evv6@KcbBUXr+*9~Ho$jCV!+hV0RE&IPHr2t5S$6=60F;vO zKXHO~)Q~l~0HQENJM}o{X*9xjM?Nxijq}pMJ2X%g+HAUiI=2(P6e;8E9~Dzy6_=?T zO8iq+x_Ec8DfoSY19MyU_Rj81QMWs<)9h)~_op9J?20m^-y4VGDbja;031vBu+b9E z3$O4BOUMPnp}Yr_OoEkx9K|a4foUwQY;7n-_$euqV6tLAhXJ}HCRNG%C?GV^{2?sN z3SW0C2hP>%Q@jk7_ff>p$A&(QZS6R`GX3XEhiF?2OiXzVeekvGJ>wT`P0Aw;ly2bY zet8aBW1js@C(_y}37?t!rqUUQ=M7Rf$@27(a!u7RKW*Qt7Z+N7FI8Mq1oDdh*q6SEgDk6ZtKiL@3Ojj!phX(}rXun}t&wJx70Hee?$O z-|^xy2~C=0M`v`tF<3MGv{YL!9b03D3W5rPcyk3C_QuhJbavNe^uG(mS)O00s)huO zy#ZAUB1LMkmcPyjl$0zoz@p+S7|uGpZJQ z=YUpRD*X}dzX1=eC;NF04F8hW7O@pi8%R3@Dt#RqDK7GH_l7kGG3Z9%ZbC&+x4@Z& z1QhuL**I0b(96iU1~HVi8ZJtjK?T?Hs9!%|Uqq_$<2rgeXWpU1J*JsEZZ^Ajs-GX> zbS-quXTtFB0sNZw_Rhc9ROOV+_Xm8Ay-sP zOk<&lvHdaA-XBdFvTKYvue_YcnjwnhPEgW@@V39r{UK^q)R_-(o0YCb#eHlDi}aou zwZ{$&s~8aZHh12*#QUA1=#y57GS6L?uK#czK8_RhIc%PI%zDd|oPKNxVAhSddo2ib z7v8a^6E?H|AS)K%MTyc@;|y?(K2hNljo`>XbIPF7 ziU>n})UhbUdM#M4WP|2L^K!W}b*ta?*MUw?dy8KUFdd#cwR=y*CttkO+M4V+WsEyo z>F0a($oT5i?|4N|A!};3nv3z=vo{%8I{;?K7_Ad3L;KMm?o z*Kw{J7j_jj_OPc%$D>5~4C-pA0me6S=Ont!?O4&h8VoE%jJXN#QSIVhA6q4u z+jsp`?(|N9d*+RK@Q`_u^!|EHjM{+UYb(@79zEH(kHT7(>>C zOz;$F1iFq*#v2-Z!L(2SFTb_*^&tA^@)}WE8e>W)?{~TVZjdv6z*TPh+09$iL=_o! 
zR;NnajPlKDvu%^gKeg3DBtzmL$>=14JMD9KGt&EW!f2}Pz0xud_# zejSvx*pa8CA_GJZvL?R{b}AaNr|j_hUy3#zl1Mby?Qguu&&uk{P$m{CYvbZPmSEAH z6MG!RJovM|&=Npi2M2yXe>~=P&hm&@R2in-3_4`I7jPok|wnozrUVsYzcD-PnT<8KF6Yju`BjNC+#~Gr2sEwwkdxaN0&i{VSXV zQC?Z`WTwlc_gjoPj(*1#?rZp)!0Mdqx&OGvlY|fD;F>|(;Q{$Wqr+as;|Uivb>jD# zrzM8V!`-CY!|i)yqa1@LzDv^HpU&AVS(arKn5DM^t8cbjT%P)AS%n%Ma-Pa7E6pwr z*DQ?4V3DU427+@A)CXpO69%>X6~AH)LAwb)2Oe5q=Ibgce%rIS*Kt))~PkMazfH0!JchT{4 zsJ^B=v(dvVjyYsi6KjbU$K5nGhU=Be*_UfBCpG<%wm)5Je`gt93Q9)3vcIu?cGim~ z;8PXSv6r1VkC*H0`h>%-!BkQ-8RAX)fY}IlQrW-IObG8A+!SEIYyzIk^9e$X<*!$0Z52~^PTC|DKDmDPIm*zeHu3W>epN*e-_WF&ex=X& zb^M}Uly*BD)ARfQALoFg^(Xe_ZCYf)mvPWCfkqv|+3C8(i-Bd8Mb-{a+PHka~rVkw`K?{jm(^krUFIroYofu>_cRbb(a zIzczC*Tve!8$+i-QRCFe?1klA6kIL{?eHir*xTFda9V>aFe6Ykv4v`AWkvd(+vsiK zM*`LC!a_cE4Gl+I@JWV%dn3s*$pZlBv#N-ykWNgQ#r}=(}s>$$&+Po zl~XUd7N%snnq?6%YwJ z3eKQIg!LFXZ8(8slpz@zm&d_)@({|cTel;he_+xE=e*;H8bC6@e$x$N2!v(}G4)@= z`!Q*yF<))psk|tk-lcl-q^X@wnwEJhksRTWy(PU<6Tx#q%i}~5O*K)zI9F3_{ zxAxE9Sqvfj=$pKQd7FdvC=Iduiz1UOHW^Wx>ox50iGC&PrHxfESFjv`u+s%6rb-{r!!+YesW%X>Ogtf z@ZOP*;mwWH!ELYAH_w%KE=qL;w&o+%#NzW!vE{#$Jtw|im9^kmT-3+orOotIVC(vA z0q&c!y6qmf9xJd;x{E6r2qv>-N*!AsxF4MTO7XTktR1^^`0=c?lRkHuAVL8``vKfX;=tBb7qa@;DS%RmTH1nwFH?b2~@guu*(2>8KZmjP{DYW^%- zSp>P#pFt)*I#fH?3JTE|+g-7THzA6lCu}BIb-ocQQqHyj)J7G z-^mOl{)SDA*3hwvz5o1eRY_=viAHeUx!YL!2U_)6zTrKQFg-(f>Ck`qVq#}g*B^4` znqmJp@dmjd-7J~72fq!umeE>mrlO|NRUA6OiPLQkG-VGw3)M`$)!(_&TMqlZeCvH5 zEo5E3+OB@WKfB=R=N&FsKsgme3nz{AO)B8%^bkmOs(CHiNJvY~D8iOdGeA8z*zLVI zHmxYs?ldecaQ7rqOTxW3VA+>h|KQ*N!3TpV*B+X-h{#AwFc!hWj@OwF-dto|DgN=i zwA?qw#&ERs+&w5nq6r!i2RURJ_Efmo0du+yVp358f`c2t5e5e^LE5!Pn%enS2C})w z!1XxcTF-SDL!cUP)czgRCdu%#rl+P*NKdqp)Gj2X0ODPLcQM;KmMkks%5Vur zJV|4U)KE)*u5K%4&gh0e)}}MO-Ko*dZ8(U<^J+M|!ky4Sgv4(Uqja&lXdwg2lgO^l z-COb~eY^#v+pWTPW+RTxPPzMdf4REIi6|);)#(4sn%`yD6}00W))79i!!f>Wy3IP6 zBe&W6%x&TLmfHv7Qy=V^$kZVDP{jwm%-`H^K1|PZIj605eL=R#P9uN+u&Hr>JwTnE zn=w$L8h5UwQ8%FcM&yg6AF;gM)qkV!*3U`jGV3Do>IiWf25_)4e%Q6s(V^shacqI4 
z_%*=eX=Lh;Pw`aF)Le4D&99eSMFqbmg+TQq2&8#{rwOSRhzk}IgX=7$0?uO?7a1ZC zwUO!Y@Yb+Jo=3^_LG2re>q;%Dg#Xu-lyGlwaJ9CzA!;|MIS4^R3Ou0R)&0tol^+ko z)y{%a`me2ZtX}z;?3Gl|yP1}Cn2ejA>*ywKIU=@rz{KS8YrW}M)?E+Vkyn4c7pHTb zq%OoK$$VtcCn71wBlMOb5t;)kyZNGRXv7V?z=Q9Rbz$>9v46*&Qt{8B`rzsmp4?78^S$$^b+&_#@yR+M+e0qKQ{WmjfEJ%^;J{Q2`o5?5QDfsxVJ&Q2A^@%i}n%I($i zJvkPw1I<^bR*wX3uKSO>oOWLxCtT0Gyi(iO5L~p5rZHZ39!~9Xn!2lWzM%R_SPX?@ zRm*&UVq9UIe&#SDDY|oV6))erZb6UZbuE{cJj|(&EG~umBKzHX3S`u3N9sNq*m`TM zJXKrXcZZZXIKiYeuwlj;V*LMksJ@uoWp$}Zu`gBq{HO8+8Gqw#0j&_sm|+68Si~XUE1*6 z3@QbWeV`QQS0xVGdxe4d=4nxI(w9dqyBXH4o2SNae24)42KQ83fnH|6TD9jyv3S~+ z&L?;Eb~8M~d)s7$f?=hF?+(YkO*=D})ate*7dD@us=n}-^XiRj!6Ss3R8$RxiD>J&XpPb8Z z9aqhq|9KCfqbbsuu>hI=6`BJf8o;`zgBq9TryoJuT5H=|yXrUJzE!lZ4Eh*jkxPnx zfS@xlCP)H4c6)a>9Y%y@;&5H3SWUS8m z*p?)rW-x7K2xQgT=({g70Q!`PKa!J|ubZ5t=^xd?Fte~Qv9f9dX=X-CbYvO0kd+tdy@#&BaDu={s z%C4>9A3EYrw{JZyko2H5aeU=6bsSjbwvj{~Y$YCu{ebqkJS9s@K3JZ0=2hCgH66#- zbEQ%Kwk{UFT?we0_WOHIdnLMsmbPlrh%Qo{h#Qg*1Q2e`fIv zek8bVzcVL4QuM;Vs!9wAwE#XKghuAWHW8GQ#{d!t{^!@}+Me~hMQCX`wT{bc4(UhG zTB@a0!rl}iS|LdrY;0^M%{L?iB-684t&Fdn?D6s4ieYv>aO*x?ezo;iE6*%w`kdYJpb{x6a;kQD+Oys`TLxckrfR*kq186$%N<1DMv z86J+#9!cXq`lG=!NlA=%5IqAI88IBd-X-pQv26|#lj#_sPL)kpRk>rccuQR{M~cLo z*P<-YpXtf?so}8_6Q4lIr2rP@8=Fy_hrxsT?`+3|qn={SwhihC0=ElE4TZScjf#-0 zETs4Yy9orA{`N%Iu5{&wp8`HX0MB$!*ZXf)*gSc13s_7IgM%d4*w~=-ZH^I%ZoRU` zD<+l%%_10Xfe^HI=TJ)3`dYC4ZS*w(}A7UzZ#}Xzi3#Rr+HKNJWq90v9FZqZ|e;mVZ`OgBg<} zFI`OOn(3cQeTH;qKS`PHWG=@HVm{x}FbJvV2II|y_o)q1b0Dpx7Y41gWyB57fN!Z@ zawi9luWpn$1IA-Gv}NDZR> zO!#Ml6EhCrP$ou35A^gXExHn?0e%|y=@SoB;~vSurO;|0zel3l!Iv=&*95`g!O6YY zoze`MsNDz1+yRuh zeD1@Rf49-AW4~b&^&Zhap3~T^bml|xI^om_-Z8mAi}n!5FXk3JmwU;H*nRxW)>Zb& z@pYp|e`-W*#C|M@)c@7ZSJFPBU|Eh=(yIhzyqHvq#mdd!E*74s;}pE-gWH}wuee+7 zS%e%p#*VGm4Le2JWeIz#xKUcSS)|8XbPI>h+r4e|aZI2TNAa|B>;LBUjNB<(H7SNa zFR$;`>Y*kn;_rZa({i+!12%{52wYhF%Y$?ba!BMdAC^UQn}#sEz4q4?Ytzk+%er#2T^7><4vy70&dVb#OyEkqbn*ZLhMoePcKG#`l*$bHn8q4 zfo~E8a21d&Pma(k%sV%{BzfrQ-a$fS;wT8VAsZ8*g#JlM&#G${(_&)0-lb1_wGJ?$ 
zJZin*U>%vPY1v!ccMKLjTJ;*7B>T8!7F1oxVZdNO_N&0D<7xFfiG!3*odX4YAN}>= z>{>^ksK@ck@n=;V1g8R&j4T3Vx=a*L_pB7!#V_J?<0Q7Kx77l1ejWIQ|4iW(v0&3GYG8B5%p7E@j8yDVys|f^r$<2oia86t0 zPBy#KcAZrK%YsutO6m6yg{fA=Zed-H9R4Ws<`A-&2_D=SHd zcPpjYXY00XSv-=-f@X@|lO((R>a(GM&f$V%1Uz7eL#VvGyhz#?WJz`CWb{6VY0fJpS55-~XqW~u{H0JHY8BoJPFwyjzhau4J@tQORi?*ky zrO82n_oB-)$_^Ggp1?=v*Y3|N{I*z9ugc@LjIe0O@e+s(J-eu_`c8^PF^hlR)bx6> z*J|gGjknUnt1x40)xhs}Q{N=s4R^Lrg6@KK4Rfuos9F8hnfJ=i>6s1_rw6{i>?kz9_R1>a#a#15iudBGy~Nbayb{PPzmE*XvW9qW=V;wP z7d&n5`ZBnxM*I~<`ydU8Sv-K?Y8sL5^hcV}0WZ|_#O*_{WhF~r();epNWSNZ}l-w4#o4k&$Hvm*nNaop9MDjO{O{n-9^$U{R2~K3Y-$ z>4%`^kcUDRhEN?&+;Ct=y$T$*46dj=bACW~*osvWKtFMGf}&#oonH6F@rkC7q*o%D zi<{8sic)>opT2vk@0l5zAVu;ctrW04@rsLGem>dwKkBd+8cj|4@xqNayDnDsCV7?A zs96dz?>Jbh^$o2gS2BvUoi=x2oO5GFjV$NeHnaHY8ZR`UGSGFZ5nl&IA4DOx0RTX2 zE-dG-Rj<`U0Aa>%_C; zZT17B(;V!*+{Y#Nh4k0PNCjmn%snfjv#qI4RrSppm{#KspKvCw=U|_CiG8QkTXQ`r zyCr&_n401+xBnbUo;U9`HzIv4@W~nV!>Vsc?S+sY*kH8jEY3~epLf6cU2o9;#Kx-k zi^ehDUpxp9xiYJa*){UxbQ^u#NL)tM3df{6OtjJ(%_3?ir(%`kfEsvj#(~(w63h zX?~Do#>mJBf|9wX)G!wDZ?0M-00}dacJd`N6B;ckU~VBV-R}TMtv*}^@Tj>tBXqxI zGA~4z>=o%A_CD%&!jrY#XW&0&ZPi)4N5UKb>2AuxW5-$EzWlQ%0}u%j32CIeq#Nm8bcZxZcL}`L+B4^z`Et(holkq5Edvkh zx$o=x=bl<0N?!_oV(8IREkT}~FU7UI3nZ7)cG~DB1(G|<^y`|O`%?0l9F#8Ko!>$ggk{ zT3DR>x=j!Y;~mn`NzKiDyqcg&ue9Xx)=xm|?nsD>;f>PuMapqQD=WpT%Zn^^R_Dq5 z^N!xmhs)~Zp-KIcx`7W^Xg|bK8%$LZ_VxtM(YDg=>I`7AQa|Ik4Gcz4%6^(e;|37?#cb*|KMQ=gG4VbS4LT^ zq?QPI25P1{7*;$?9w4fZmE@#oq;Qm5;T7*;)gd$O{y z>>M1>GBXDPXR!+{CvYl)463oaI|wSkfcLn5S8>G8&kxG6Y!GeCdGfqvL$*5Ip@m+n zLoXhv@EI1oefNDj5ap36D;AGRllb{6{rh{vJH{QUmq7{T39A)^m!lcJgglhuSI%w& zSMdp}hi>_a^)0?X@YpD-8<)Z~ZVA{1r;fcrVH#?mHfN!u_QeUj>@F5lm6h`(7hMZo7+$>OvDI{?%i8?)eYF(S41pUBt+RYTcgura%d{$A z(+TlyKA@wHfG>-0_-!X^Gg$wknjlbSHfXNLc>B&h;!k>4zh@EmR;-oN43^@M0Koig zDRteU$4=h%lf!hc3b*R^4peH{&&cQvNQs9L(XYDNJ%1%%IK&q(O}vK9_Z}&yYh;_C zPH~Q-k#J*}mT2Z3t>wxqLe=;1ztFC#*m6YN>5-UH`cFQo?Iu)_f*Ox{#o7bb(GDKs zW3l?kxn&H^m+f|S&T|rmW@<0`eHeIyCcMmn^2DYTj)HM!5IWT%MB=fmEGB@p 
zXs=-XfLRF{Q;!1GOwCa1kfmQQ3Y0jqNeiAa6qCn$Y!AYjuA%3O+;HX>7k%R6DWC)B zAy}=B<7sPniusOH{JPrWzPJ(G+b4UZGxIifaHg0{$2tx%>UEImu+u#|e0aRDzSKC- zfw2GEr~urOXRc?2BM>?PyfvAcO@T)rvBgx?)IQgdc15Trxo9vaTAb@p-#MPZ0V1HW zK>CgsFO!K@AZjIj#Fa%4Dsjyv4ySqJb#8&*AB6O#?)$hTZ&$zUPP?I(b*i3k#ICp& zo0Dnq)>`VIXY3{z6f+2BhDG|ILD-x%?DN~2qFQ#P(SgzD-?hy zA^h#dbiH(GHWM-Fc&bhmYGIaz0wK|ho{PW|#wu(^bBtY-|6^q78`}jAH;GWlU0ztY zUux=_npDR~QcMiWkC^ChFRJYl`bqtSn4&7a%M`I{*`H$?V56~_Lk&3Ey?e27qN8A= zGzQ%kdR~KQMXF%wCyAn-?d|P9U>fm&whFY10tVixT%qT&2%9p?cnif~ zwi(_i(wgAZ+=EYFhM&Ki__9PP6j&X`N)fAuDE|9kcVsfVWvVA%Upm9fjxCNj}kxEH2Jor&Fs0h<2Lh3uMY}g>AUSS$3H`J&;XDQGWs|{#{k#_ zIY6BNUrGw*2`y+WP&Q&!hLxwNyqunz8da!zK!+Nq#`XQ2`zVTVN#UH~Li3-pk2Ylt zQ_O8%qBE=Bq$BCKSbOVd=iqc7Jye z?w}GjfKi9vyw#=CpEdq2sZjZoaRrmJ%qPjL@<$!wLz8GY1TEowUh$?cDjC(wA~nY} zU%j>P1#GKKk?({zS8|isHSnc`e{xc`c@h-{$;kOU)H7B%t`kFeK0}R-^j<1Cv;SWp zrQl|nUtW&fKFw|pd#0v#Mv)+0-s1gy&74J$04Gr2x|goLWZ7N>Gg?eBV#5AW z@l)@_x73)3E&aadgoJr$`CTJPMMbPLh0+fZu|MhRw*%2c-945%iOffMN}RMH$_pF| zh5(S?0n(kyNIeY(E#YbMli%Ji`)3jucAI@;vEwTPn9KJR^BH7s@5{^8>~>YY{V1|) z&a*PS(45mhX5-AoZ2gx55&j?ouQh#eUF)PvzgxE|;-w7x?kT(XJ)r>+7!m>?fM2=P zTn&Uq@b(}MK?y@Q(GtjEMJ~#zMJu3L{K_H6gNTQRLDRH*ZEI*TX%bU;3aLd$-e}r0+`c!js?2%S^O#R$Q%CaA{493WZpFo{4+!*bZ9x+R3M-MFc`^pU@o87p6on%s?^pE zpI^gt{5!?yXwRwoR@Hv(2nr*d8Js})>W_yznEH$W`S&1@goB^o1l(3gm$5qX}6~C5^v0;8uMkhM7RnC zO9d$T91v*0*2@3~<^N4d`G<#VACUY{!CUm4kcY2K^LRC|i(hQ+Xu(*w?&aH)zdcIu zN6ctA)LTigRwEmKKpwcyk|^fwm2?BYCFNTa!#6I(ov^Y%OV4ZcH2Q5_P8_yt?i~Xy zhQ{;n0@ZY;as>^O`>*wN=RMeuIb7_ySXX{;-AmEX(io$WeM%zx(Ah*MQvOe(J#P`b zq2=cM->oFM4W8?C%PU|m*)}i8@r3k7b1&RRL*uUj-*tY@9km167#R|0N$Tde7S}@T z*Q(|OObxS)j{L1oM)OSH(=Mi@6vcg0SQwj?_8sxj z@VL4OS?ueNnnb7k{9IdQnukMz+=EFRGC__%z9e7~cFu9q)g+S=IsE*;hD z(t5aWeT!PQ#OvaRnd;7NN3pCjAxuO(P6JLF{+}3alxctf14Ql55N8SFM1?I43^XYq z>p_-q+3z<-Kh`E>h5fGUkl>4th>ZD&kxH=yoK{3uxL2K{~s%>_Z>CtI&8HeGgP9b-Ot8GdebHvoJC5fJOQ4>cqDjFPUIs z%vC5Ei`9&1zhf^EE_ocQkppvBOPnNpE>|abL451bg3<{1M8m7uLuKZcSh^!=oteg;!_4H+}G2S#`>~k^g4$Z^Z;3tzzsp 
zoUX`USC_{-=NQy`X$?}yMy=;pM-fjgMsky&M*|t^4XPtfu$C~Ihd&VXME}09BJzfr z==r027i@}p?)KWsI0gMR?fZ_Hnd0kyUYh3i`f0Jz5y6#O@4X$RsAB*of-!m-nZ~BH zE&5-;IOoG#LMA%@2F7#G9VtMSXAD4o>lYTuXuQ~{&huHTC2!BfWOKAp(9@RD)koR3 zK|Xa5g*aR!EDzDsl|JQY;ArU9T^v2Zf+L}kgovmGA{bDS#p;lufLr+qet#(}AQ*fq8qD{Gd&OC{E2d*!qflb#Jf4OV!^YjqQ#o)QnWR$B;4wzlg$@ z)y52_k}@AtXyjwj>*`t+=_I4Sef##0bOKP+yWE8%%ue@mqC62@e~uK zC=PM>JB07Ak?q-65Wo3;5$Zs*pfHEZicjdZ;> z{W(7Y;^YCH-=iaX+Qe>f z*{`W{6+5B~<)|cVXVgC>+8!AHU;uhDB&-Z%VA$8M4e-6Yg6X*5lm|}4Y2-#}G_LP0 zZq786jRsA=R#r$gLv)GNv|uFLOIhHWii(SydV9Y>?+(b+@u8Q=V#7tk|0j#+4993> zGT5y_7vfjoi|3o)u$ciiwJwmc?Xg@fhADAyax#WLPj-ole)KBSVkx6Asj}8nPn$rs z?(KLYs*r{aCucxkxC{E`<>$l%*E=$A=WbtYp395;t#VFbOaDO6qILNyQ zr#2y9Co_Z&MmH@VO9J62X!sp*|hy+DsIAV8|E zDlPd=_k>k4)NS$BEo2lUeDJg??dafW23Gu)vGLK4Pyl{YOExPSY1W+i@K&7#MM)%! zwFnBkg*yt{L+Ra4{razQ5xXbiS1DdfyP@9DB6-RFWzDUz~hUjKGXaoBG z+wx9y-z3d2HX4q-D#640srR=&PDK9sfc_XZa@Yz{8(LfaAO#l%g6CU7;b7Hm3Xfkp z<0GpMb{;K?yh4(-K8REd-T+)8AIldEYVSBBGcz+}VjgsOBDL({&&sWS^`|&npP$Ks zp6T`T-^tQ%-q1pcW)f6K1i^%itmde&HJm7WEBE5X3*|S2Qq4Wt-05UOzvrIJ@dOa% ztH)DT1b>R4m@TLD&++9&p?__?J2AtXYC__b^sD8UAN%Z@+|?0|UF6%db1LKzD$x{Z zrHBg(ld{zl!q%Rvh?7(vl+Mbr-OWYR15>rh_72QX-@lU>82FWafYvZbp~HFi=9YCe9{btX4%D$!HbOse^0_NfRJTd7yL+lyP>@r~s>U8l5B~3`Ht~OAC|b zlNOkk_QdoI3+gWo}= z+C?N__EPZW>OUV#-@uQ&G>H~yj{~bBP7KblUL1E!en=|$GhNLOnO*PZq70!l1RPbq z$iolP`0feB<0Z>KefsWr=3~<*oD?pr5K01xyWg|3QGhV{1lJ4_HA6BgopvSDk@@uyg)`(`16C`r;>QA;)35ZRK!O>?09} zMEciOdjnf)sc%fKarBi_Z}V?K{>GE`9@B?x6G8E%X!^C2xPvM)UM-pjZX)2$2AVCY zU)auBkXrJNsv0a|Y3BxFoE2^305p&uyHX$Z>k4RWk3g*hOoSKrM1Em_Z|>Ji8#o)A`Z;)~Nl_;(Er}?w-Dwp|#bo zK(F9{1NWUn2ZPg+_o?Y7ON^$eCdLK^Z2Psa*#J?bRmvMRbRh3{ST#wbtxT9;rZVi` z5g-j@EsR_y$X|W-i(5e0$Mi$vDh)lnl($R}jpy6<@+%H@I0*TLMS|mHF?QK!z0O$y zY{Q+B76d&?Ku~t$-cq4g?VMAF;?CLns|O?_n95U_xVVWAB6~I=p6TXqSO}1| zSfJd9VWSk5=g}6nbelIA87=FgUH3;vy!Y~Yq`#t^E@U> zDYHA?^YU+@szj_XZNu8v7S5>pk6C0SHDi|5^hMS+@cw-Ki1hwkLg7?h<02|OE|FAd zGKeu(H>O<6f9wv(^%YPFFHS|_C7TjfWFl8RXe4|e5`sH3GlQaA?-dV%uwo#KC2$#n 
z5RE0|j?;DbAN`d&Fh@7&STEw+Px)}cj9Y8HZdd0TIlbm`RG8epAp5NA%{)o$X9anq z#$<&93!M<<3!M~R_ML)Sn4xlh+mA-?4O2QIQ=slklPdk=_E#-Cb_uprRBE z>P~ztEO399S@X-EFtXD6Fob>~7uahqO~jW7fADsqp+ChhQuXb`U7=t+BklKyEjJKd zfVTo(yhqMk1iSDJg#a2fRz&aROm5)b?j8)8yvm+2?)FVTa3DT8*0N&sKB7bHDwA`X@7HMgIS1S?~^FjLhLR(kS2Ryn4nwLNG88CrU&48~XfRx%mDThb}tR+dl z1ayJb3K}Xu*$85GaCmh^nQX+LsHwG*Pc=}DBCh@5}b+l55LRUuJ1VZ zmb|7Y*0n~D6u%mRTHHrsy(ug-RQ&1Frv{sf;m;$;rRS{%i7bjc(dr748HSvXx*5$q zJYLfqlM(o-zttY&v==en&RAC&Ef5Z?`y1cIZ`j`6ZG5yxK`!hVaqL&tHq-jEol+lz z8ik9i_!f3mL&9%4TG|XlYeVFI0MT_sd6kTlxbY_NS7Q6CJ zQ{UB~-%@_vzIZM9F`-nY2|V&)k6<^Kv=}kcD$C`B37m$TZDoe}oO5XP(!Oy#{QB%9 zFoL%?CSJYX>?J_i5On)INNwP;Q=iY1xvZ8N!tTRReJM!X7(}P@RJP}F3d5=zQE+nM z`YPS>Wkq0Cfy0~OkrV;6V7p$YZjV~-!)zBafl8n&-anu*nBb|fy`G!RCThblDa;QPR<+_OX4Y{6x4+=b6Q4g-IFK(MpESrUC}9 z6k{x|!*P#GgdR#d<@AT>R-YB7QTBm2;$`C+?o3t%3$?|M+hw6q-xPIK$Rf3iA9(6g zOO79o7U8wBPhpf-(@jOwFZ8Y~4q?w|9o!>8FJ0=)yPJ@$AMJdzw5e6rBDbBc5rXGV z01}0cUphFEMynu`fs`=}Z=l14gu9@$W8|Qi5wfG$Gi!Q}= z{?!$OX#m*;2@@sgJ_Mkzv1d3(Ls=~XP9QR&1Xxc5y&*?{ujK2_E@#k)jV-4W6MVZ8 z;k2*=*IyW)oK>#C`e$?-hgvq%gM=5II>cfyXr8n?H^$)i!sDqHJ6U9tKFTCkjd>O!o9gB3eaaO~X(Ja_^dm ziV4pXJ%S~a=&r;2)S;kHRvFijBJ651@wB~ODXLtbSkRwX*I)2K5KzO&NF*?_bIZ%E zSnM^Ap$s*MLJ31zTP+tuPY;gC0no0(cYi!k!e}{ut;z5G-%U6MBDa%UL8TSI;m42{ z$Eu?;>`Jy`n10QM`pA<-Uc|P{{IKtwDWEa+Y_ol2UCVY2gK1~lx5**rIo9CosJj{| z9XhG!Tjd{FE;jLEFE2ltH3=&BHx;$pV^Al^yMJI`>Q36_Ap6JC3#OpryWb)#O%Mr7 z8Hrxk8it0#!SiToX^C{tiMSsvdS9Ipy^Q!esdSePyMEow@pS(yV6UbA(QCll_MY$|Dws7VmiKapx%%Pcp2VMPQ!PbxQk zDdnhB&AEtFeHqS{77FN4IJ zB@Zg#kssn8J<3BS>Ed$D^3?%8F!ql%K!KRAu+c;_~Y&OpCFFWn0 zs4yqlyw1aRjHNyJ389h1fH`>2@d2+8!9DH;g)i&#ubRFSoN+Epvg(HT)6;OrR*G#s zd;Ia zRv6@T!yRc!%?K{Q(jaN&F~ z>W3OLvBCr<1A7JvycYKb1fr3WUFb3AfOdGKpD^5|sEET5UAGQfn^ivQK47^CC{RIqc20UX;)%$IoInM_6AB>G359zY_9FCUN#_svs_ykx-@Ek-|4_gdN|R$e#*U zmYq7l-|qfgp;v76-F-H=)xza^LJ_IW-CBQu4QATLo3n2hj9*+MjUHz?F~hCj67480 ze6)q_Q8%c){I7FV4;r|o4^txu`Ft3{z72A@EL|i%Q*=L}okPKVWtg@U-KrMIr^)y= zDhe)BjOgg-aAu9$8X6h^Y=FX^sSr41WT=3HU_8uB%4I%&jR$N0`1v{BlD%N83p9k+ 
z2Sd3riwWwvAjh?0x<(-A-p%D@>0mk^hDQtXdiN7FmmjhX+3S4BYQ1ewpy9FcNo>H& z>y|+4?wHQ1T0}Xh6VI6Gu>{tdc(dnvvT@P->pefl8I3m1zdJ7vQ?|ZB9I?I5I2w}EXD!W+nhMre=; zr=Yc_W?*O^Q}$sH0)M{q3@0Ivun5jYQ5uk41kW-CccJ-v08|aa!ooIocAZG}=s0!s z-b2)S)xOpKCXQrPz<0uBRfh0KOCqk^J zN`QTTZ8c#`lXPO;%u30|b)`5sB2r1_Fg((+0kZ8gUu5BfI&=wXV*|Y%_>#zkBHgl=_tc>g6R8;Zdr9Y^_kJqvfsczknrIl8+kt5$yS^eYVNlM) z7hqSP4yK{BY(chGFP|JW9dg##rFGGOWR5(8oAQLdOC&g~!s+A}VJx*@eyPP08cn&& zD)zASrPkkKT$75C!7vaGwSCroZp#ne-y6R(@_sp(s>hRwZXHkQSy;)gEc2%IouU9n z0kq)Kl#`83!^A{PRMcG%hEVSftQ#DfuTG6~c%wh=ifrwHy@VKap+i}QQTF}+0TChB zFj55!cmj~Vj7YSIgcHxIFb~`kOx(pRU*JgsJ&>4`JStOZ(hM(}4 z8K*0+u~pccU>@$?Mf6Qhe$J&>`0^-XP5q6VtW3C( z#~MHlGu9)c^@H4T+qeSnX&Ljg=}1+aYXvX92jmL|@6JkDc6OYIdt_>AYMc=~ zdtPa{$ew)*HHhL~_$~g(jc~aewX#pvpR1{>E3mMT130dC&?eaX6!8x#BmmgRl+)a` z?l=&DzcReOIYBBEikFN>RDo2DeMwBzSAKIxi9w1}O6un0vY6)2{sug3CHwV!S$44bOv%if_!2c?!K@k>Bu5IHGfj-upmxu$znF# zppn2V5&pV=mytz#pWk~!D9pkYcAy>U*^I*SA|92W^j%;nLy{B91HQh$O3eHae0&^6 zm_y-ZUJ~KFRGtUQy`$ir&436WqzeHqxxQL==YNn!4DZWZ`d+uPF@wo|`P_QN=v{nk zHx*bp>I+xwz-C zbw9&tVz=#^yo^lC75T|MHI8A^01jI{ZF|*p-1X*3EdJz7kiOiZ3~N5|t?$ouk}|HF zLAxmc#*(a(^VJqG6*JA`%ikS}gd6`J=WtjCKnbm1?t`N3a$2IX2gkVkop(CaH0a%lluqef}vU7KZ5O30tTC9W>!iU zAEDp4e5_iI>G5awI(P6h>W95K|HpC61iy?kVbzd^AAu}?jc^zAEW zB%j%{vP6E&DEQaKo&Jza^d!-sBqAj~;tn!vgEWl+LP!4|gv|xWVPPSsp1b`(8j^~7 zJX&5}ek3HcVt$yzAe+T#&Z1Wn1p`Nyhk*oPB+y?=-+%Sqo+#sWB12-0Lsz}+(Mpik z9Vg~Z`3}O==KlQA&io}?)!ZxzQ8jScHNed~$z7TxJ=A5s_9rs8qxtZ^ z;ZDZoBRz?`svat2vsS_lRWIrL*A7?``M$J@}rAq~{ z)6OA#BS5rVVm=%IHUaQuyXv_v5kZ0r6>|xLkZ$h8^Bz}p6iA`4Ej2%Wdd@i*#Zf6P zRPxYIT&B8_jq{)&@mXglLrHGPLOmJ|fR;P^loD3W9BwlOL1zv~#Dh!VR>v{#h%8$wkaJV9&sR$jci6G;;fAR_e~G91Vl1 zM@_;ycA_f&Tp0~As6a>?fk_TooeezM{KCRGQ1b3a8wOJqwrIBtVSohpZDCH@SXpxz6>K-Z}BA*d9m9#?YeYM z9;HNf36r_}eOK9$qA@%fUeNec{dyrMulysgM@-=)&Uv(aDT}pPbg8K+f3|Ob5p+2t zW0!yV3*ZopkY`BGWhZgG9wPA^u`|`Ygxm`l&7x5>qhj~W5ytXVBOUFxeDRWlWS?<% zNAHg6`!TZID{8+FpR%>n&>i+mEBoKpmFkgQ;%)hzgH`tp|95Ht>3ZLmzA^UY>uypEP z6jISXdUT>ZPwGp~3_?6gZH(mPS7#0wIB0Tm01Rd7&yHnD-U)+zFJ;Z`Ko}?C&_Wt= 
zAUlzRyKCC*q_^_@fW#M)PGZ*{<{LuJjbH>C6yKcSFvA~fpS4lqa-9ihJ@wRQ*^!(B z-5EG$R57h$eaS zWV_HKwC=fl*!C8ae=Un1JuawjCD7?7l5uw|>(0k&z7FnRF3N-Jhnd+-^@H=Di5wn= z7h&-&YA@^OTmu&A8Ig6<`KP~^Vli!}-s^x5&7s|q2v^j|zl%v$RP=lC%}JF?M~u5X zhB6nji5>PC6?kXf2^08YQ71gl#>~-Z>WuK4%+vm*WBp#MBT7{bL%9Z61GwZ~L3h^Q zohDr1F;TuoMglv`4S(8x?8WuJ+0bJ6t$P8j67V?TK(-me3H=v{)|!I>xns{Dm=w66 z^ilXGzu4so4YNjmGh{(S&Bzbo8vZ1pTF>h)W*N=-kP?yAyIt=w> zD^z@&LQ3jo35zB2x8o}0N7F@Hyq@QeuEQz+SWPXRo}vj!dttjO`HTFKk9sh9{%3Mh!FPmzsYl9%ik8&H_}a#M}HkKF|&1b|QVu3EZfJ z`~x2b$HpeeReCD-5t9hJw&*3M6E*`ZijXzb1)s1sa1Q-LdBlz6R_gSv4&Fq^<-@Kd z0m2n-TV%Gk!^YShsJ{f8;?0K71oUvwy#g27>5S4p1~4{6CN6mO0dRLloFju?A#^#x z?GNJ%HGuPCAw?_x7*bCYCauQ`-$Q+z@J%7Gh^N936V9%WjABDZKEauRfH`VA))^A# zf+&ReR-7RJ5BL)Z!0N>7m-v=fzNqdK8N|~5AWrAZIcvM_8h5bg`=bTFdlu85_V#A$ zZTLh#h^vvQ_T;?2ljh}N)gUCQhrKluXU$K!7~*Mim-mNb{)%6A_8a5%z8IntMxFa^ zwUi*&c65wL_G9Qr;>zc8&}SX?hlxyTu0d?{)XAPhRe#g}Qy8oJ7vCK?M=Q64 zF4M|@cj-$HP1Nnt@ZZv1z3+pEw!C$lRYyWID#_QUw6ur-P#}&xb-90%7M@@zPbIjb zU;ZUHJMslFA`rm_Z_pA{uVb@jdckKfZvF|#J6+~{NeW9KWn>3N^2WizFl4L?*&MQ~ zs}Nv6UDmX%F>qU7N5_T=6;b3x0~%{i*z5psZG~-V&vLfNPzLfZ#sQ7LUJPBBVL+nU ztqst5czW(f=%48k2M29D{o*I4}iDm7h<03rKGq0O)o~D`P2%7c;rdLpVVoI6q{2S3`T3o@|$ZrRa3va&1%ZcjgMn;AuM<{e&y<*y%;Cnp#DV0{U6!Y)J()>+o zNjAucK{g!h?ASmewH+qp%`;#M# zt;|5yySJ|eo4%7Tb_UHXg!|rN$5f3g0RX*K&fC3UY>u+1ht!4pJUrnbS{FbV#>B=J!&D!4 zEug#{B)$g*@Hc#qJU=(infCi==kE(poIYd{w4%*x1}X|S_HQuy8M}tuy_x(;Khj@1 z%I_UDl<*@Gx>sP&YRz2?amZBf4+bEbc2SzAi=WCEVzdDJ4KI;b|7+``Uq-P|?%HJY zlNqwRCBlhsb*!B-nZBF5p)Dcn){lnVQFG$uX*^%rT1?jI({)r-JyfoXhW>x+=Z@f4t2JeMN zovAgYKB2!WM!9H>0_-`Vi()V&3cd$5-HJfb*XZaDWcMpf!I&lW$oIW`dI*Le`)NL2 zULzQ^K#;Zh{(J-KzZ0BZI72!E9ThaUi@`M1uX+M676P*0YnpH$YE*9Pk0B-&3>&1|cI*6)&0GA&f{4zNWTkitf zI8Vvfaw9Y!QM0Wp6OrG{;n6{zJHYkXu4uVy4z;;rtdNnc3a%~8T9(i#a^0W~poAkJ{-lC*`&~Af=%s>M5jKv@J<5;S47Bu;ullp{bcbkt zDhfaW365x4pa^a3?Yl~a85uvrdre%CN!{EOUcBb(`%HkGB}VutjtnWh{+(NDd@{pJ zYn-;P1~evt%Vy0rX5p9&IKkckeDub1~nn{xE8RciNtw$?kOAD0fimK*Rl_I*C9Tno7g 
zC|Li18nm~g2Na;4>wb7?L81bmPsUZZUiP8DDqNoCS{bU`>w6HAF;`_tK^8k;i<*s> z>;lMTQuZGa-1}Bj74K_nJ(GO!=j+K?FZNjmbD~WN_hlyZ0fou667v6kbr!9Y^?6EO z;df=j_9<8!cx>}_Dr^HI0zcwdA9yk$_CRZ8_j^d~c3f5GO3qcf$@+xwWIAO+{+=)q zosLu47m+pzcv5%UdS^F_+~DL=bW(@8_?co)#WvBeS5Zp9p?7fy=McN;gt-=4C$J+TPxn)NgH#tcr>00XMRDfehT?>&ahH}7Sb*8O}e?~F`uoiry zH>O}Og!Q2hDM}#%VRE4zxVm=>J;UlnY2*JXr8Xz(H_c^pDjDwDR_B@pl2b(<4$s+Wb?x! zTLdHDhJciVhitxMnYy-HS+;_a63Kt#iN^^+>XjFWf5o*{II+vV(rk>k2u22a9@t9Evr3r$XP$&qtt@T#P{kn zRxy!%e466|6YCQrQLBl~&kUw0NGQ}w3CaqN7B(=d9;=t#R`ac;XDdE#HtH0Z>`!rg z>sFE*oRiYi2fob?{mWmyG$eg%_Ec+U;r8>DM>wkkHR$TZexX`hy=5L;+4TRW zN=?253njkIEdjLTb0m&O@b0F6TM5)(`{7c30k;_PIb6O|CX@;Qu8c$dVWmt@jPcst zl0Ak)rI|)ChVbaHiBK!sE8?h;hf1xs-|>=9AZ3FnUh%t$tF$yaKfEJtB1k*s!-qex z>Z2{9pUIN(kw_^2{Hd(Kld85;(MF->FvqVPhOfMly5NKNpd6GDTiC}35NYRqb*ZV@ zzL?Yas3d*KrqERO#UFKmJ?S6lrMz!D9y41P9=NpZE@3e{CI3Y8#B5GE>+2wyMqcMq z6lV>2N}_d7t4M}s{W|TPqGu>REwp=wXBVldMqJtT@&k-PGRlNJWONxn43d>iha0|& zzFi+o^*a_R=mePkyBIOH$x0~JhN#(hDWxDQ8k?dvUdUd^Ou$m zyQMrY5T&`0=D2Jujji56iml`3->G2;Cy(Z2b+0ED8RqXS$JUwVY?tf6Kj$G;f-CE= z_Ai@z=MAnQ9rG$_^y}LK)Cmv*T%ud9un<;Q>2UqP%|V?w)&0@zN<#T|Vq}0cJb(Wy zh8?X$hOkP@ZRb*QQ+osJ)S#4pI}yKtvqpwAGEZ#iT{Y*yaT4)LKi$jh7jZJ(6TGkE#a?Du1kydZ;N z;Ooekn9g^o*C8oOk`Y7+*I(-FP47~UJ{I)~bvqoRMQ|AnK zpJWEtUb(2~=(&Z3CdhOS{wj~Ld!H<@dPTLHW}&c(7(H9ItQM!kOYha|z2E0LooF!n zBN36lH3zUVCj!I*8RoLa##^FQ@S^z=EBc@JGb|6;Uc?dHY4S&4JkX&+?D~I43JaLI z6D3@OsMydsXSMfqmUH={#Fi!ef-EfSv)>5Lh3Rx(Na3pFhh86*Uo^+ZrH4Co z>Tkt-6Tf4{$Bh0+@phbwr}Fxs{IWjBwX>}wVP0_QG*xfcS!^&!_x>sD#a&O%n1zB6x?8&?m6#BUT1Xnc_voZOyg9#=qzbD@boWP7l1ECAuWO z|1R(zAsWH)9z&T^OE~g82eJ@Gg#UffDrMlpv{iaYCikbxyQOG{n${@cfhq~|pLh;5 zKc_{H8g2^FF^94DZd|`l%jhRM`F&R^u#U$u2j#IJTdY`qqS1NJezQsH+>g@KS`*$X zm(j--DuNR;;r2x-GC32Kk!y!cLLqvnzXyU%iSsUFTP!Zq4S3ln&NVU=XvD}j7nyGD zzH+PW{2a(X?zAc_bUTrrT0jKBCxg&Ks0{%#r%F(%tpbr&Ft&<-;IRI}yVw@~in z()$y{LO%hTP43iArv0yM*=tTtD!hE@EHBA_&3$~@QHAo(H;X@5*h|vl%w;o7B5c8B zb9-1o2z}oPcv@zd%ags3tkIz1f4#40dKcHm_Z}dKX%V+R% z$|<8XT1u4h|Dx1U+}o=y=YI9a;jMC*c7({{?}J;afZ0-c?4ZKhkOtHp5`B$k@Bn9s 
z@11hZ_9BUK2nzXExK`dqsZv+_J1XzR$KU;eZkRVv6VA zzI8a*x|~oDqDfi2U;pW7YME*;simQw$ZwMq=xw7lzFC~EQP^G3&whTHs%-0}Rle>-RAzVDY_#EXs9 zyJb^t${H9f%vuwcClid@{btd*9Y@0>%!VkVsvoBy2=*hDEBpFy+{?Vz1aj;_aguCw zU7yB-e%=6pvAr>|>h4@iFj3mYJ6wA08xR*v$J*-v*PnA%=YXB5SUf?*$sZ;$8kSu+ zLqb785iRB|3P?XA80H_eVH(LrpBc)j%{8C`Kx-di^xElv+>)>DBHhj6Nmoa{QT8en z@`m0ao6qMuoMK#nJm6jM#=ioM;Q{m+g9QWy>3)HSiu~fhpD^UMw;;jxNUp;e5`yc> z%Fk;BQIOkD?CzJjNAuzt4~WnR(b~)sf4m(liE0e^mMbq}Em;Ro*u$ygf8Dptfq}vc z>6~hgM&IhXb=x}>;^DE`cnL}>{rr9zxwK_nTj7ymZ z^3Bw=d3Ty72FR)Rz`l^*`t z#TF9%H(;Cgfi!<$zLMJ+BIg&P%>Wb0*R<&U{C=hO%X2N$7FUwD$b$(Bzf;`La@($I%(V zWB?Nz+ZTXQ_74V?7FIR)=)(}8dv5*4ihD86$keU*(^p~N&vr>2`=-)M!`aVWf9>gn zM{QR?{ahJ9re}5AQvX0KmnHQpHDwRzj!ysXBeW`Pl91~8OJ+JWV}u_8J3am1*FnFGfc`Orm_VSO&F==(>=#)Q z$6M1_ym5GFjs+j2JQlpl=B1ihYX!4&1NW7PU<9q2cyZQ@81fdi1xjnPg-@S9AK#H zd3PHp@R7l81N$b@&$OITB@zWcEK+A zh@96vyy!Kia#7L`DJpx9jF7xUX^S=r z2_lIyUg`&1xU5uBcwC}mdCxql%}yBFWmS5sk2@7_DYr;O2Fl4B=r3O>?a)Dmj)3 zNQZ!QBb}lkEfOLnDIndYl+rETASHq{NJvR{2uMgbNO!{-&sy*M?S1yO&vl(YT(Vf` z`aLnn9P=J`;N}6OhPa2$PWAVL-|GYVTh%oX`NPAbp9S&}T|i%&fBgyrD_zJ}i32e( zCMMJCADl~NX>`cgnD;BgKhl_ARXiA^@sA|H!5-HeP-mdPMy`I)A3+!y@CziY0a`H1_p)8I;aw^a zUp&1)?<;mQ5GJC}yAA{NuHsU!S6UxP1wj38meKnCZbQX?)e-JWxNm=Vycl^qgpt8j zb2p6ue3?_hf2?-0NdG1djjRszzI;!Gut<0zyCWiw%T(o82JS5OsEO%|DOdW|M&Ix} z8tfD|zlg8`rQ!8&SMa%^q0}+l;tntFC+Amb>}X2tndBl9a=HakJLFX%S{=h>?_Or= zm;B`;zr0Tt5ox4m=YszJ5L)*Ae<3uwbmjZu5t|1;EynQvl(NfWvH%rS45oxAU?Yn)s)(lWFgzRsUw{rh zHm?436A|vDjUZqoL<9#SgKu!$0|zSvCMYhR1*3#HXoS3pUih^dWaT||4c+9$6UI%7 zm(@HGMRfeH%UOY1JpWz<)VRX?n>|}KreyrOPCtYLQ=WvdS4hv@8-2E7{oPzK3RvHi zd|Y!MF0ua03f`>ho{te=E7q2I?(MZLFG+9-TJXIEHpPApTl1yaPlp|S7Wy$dsa%a& ziB$JH`DWHP{Ib3YX;~aZ+Lkd!er(NFx@3EEbd96m*1f=ZdxlRt!6dJ}M-o-1$uDd$ z|25;VFbv~Xz>)oTj?l?6DVQ)yK3`H%zQRK}cPrSBigSF+nprReAmaGpeck$zDqC#O z+9+?^%+i2`44dR*E!iV%!f(e3Wt2)N&o%;5{Vzcsf7 zJJ#I+sPY;*t7qlHEQ>D9)sBnazY`Jicb{RHYL?fLIehmO74jQ@?Sjn&tuSBkNRu*( zg*bKL<@|5WRjz=K;JgL6O_2KvAsC@7=E_J*(XvyM8of7VD#f}OwOg6=BpzlFu}VO{ 
[GIT binary patch for dev/scaling_laws_jan26.png omitted]
diff --git a/runs/speedrun.sh b/runs/speedrun.sh
+# [...] increase data:params ratio from compute optimal 10.5 (default) to 12)
+torchrun --standalone --nproc_per_node=$NPROC_PER_NODE -m scripts.base_train -- --depth=24 --target-param-data-ratio=12 --run=$WANDB_RUN
 
 # evaluate the model on a larger chunk of train/val data and draw some samples
 torchrun --standalone --nproc_per_node=$NPROC_PER_NODE -m scripts.base_loss
 # evaluate the model on CORE tasks
 torchrun --standalone --nproc_per_node=$NPROC_PER_NODE -m scripts.base_eval
 
 # -----------------------------------------------------------------------------
-# Midtraining (teach the model conversation special tokens, tool use, multiple choice)
+# SFT (teach the model conversation special tokens, tool use, multiple choice)
 
 # download 2.3MB of synthetic identity conversations to impart a personality to nanochat
 # see dev/gen_synthetic_data.py for details on how this data was prepared and to get a sense of how you can easily tune it
 curl -L -o $NANOCHAT_BASE_DIR/identity_conversations.jsonl https://karpathy-public.s3.us-west-2.amazonaws.com/identity_conversations.jsonl
 
-# run midtraining and eval the model
-torchrun --standalone --nproc_per_node=$NPROC_PER_NODE -m scripts.mid_train -- --run=$WANDB_RUN
-torchrun --standalone --nproc_per_node=$NPROC_PER_NODE -m scripts.chat_eval -- -i mid
-
-# -----------------------------------------------------------------------------
-# Supervised Finetuning (domain adaptation to each sequence all by itself per row)
-
-# train sft and re-eval right away (should see a small bump)
+# run SFT and eval the model
 torchrun --standalone --nproc_per_node=$NPROC_PER_NODE -m scripts.chat_sft -- --run=$WANDB_RUN
 torchrun --standalone --nproc_per_node=$NPROC_PER_NODE -m scripts.chat_eval -- -i sft
 
@@ -111,15 +96,6 @@ torchrun --standalone
--nproc_per_node=$NPROC_PER_NODE -m scripts.chat_eval -- -
 # even better, chat with your model over a pretty WebUI ChatGPT style
 # python -m scripts.chat_web
 
-# -----------------------------------------------------------------------------
-# Reinforcement Learning. Optional, and currently only on GSM8K
-# (optional)
-
-# run reinforcement learning
-# torchrun --standalone --nproc_per_node=$NPROC_PER_NODE -m scripts.chat_rl -- --run=$WANDB_RUN
-# eval the RL model only on GSM8K
-# torchrun --standalone --nproc_per_node=$NPROC_PER_NODE -m scripts.chat_eval -- -i rl -a GSM8K
-
 # -----------------------------------------------------------------------------
 # Generate the full report by putting together all the sections
 # report.md is the output and will be copied to current directory for convenience
diff --git a/scripts/chat_cli.py b/scripts/chat_cli.py
index b14843a..d35c435 100644
--- a/scripts/chat_cli.py
+++ b/scripts/chat_cli.py
@@ -2,7 +2,7 @@
 New and upgraded chat mode because a lot of the code has changed since the last one.
 Intended to be run single GPU only atm:
 
-python -m scripts.chat_cli -i mid
+python -m scripts.chat_cli
 """
 import argparse
 import torch
diff --git a/scripts/chat_eval.py b/scripts/chat_eval.py
index a558303..cae2f0f 100644
--- a/scripts/chat_eval.py
+++ b/scripts/chat_eval.py
@@ -4,8 +4,8 @@
 All the generic code lives here, and all the evaluation-specific code lives in
 nanochat directory and is imported from here. Example runs:
 
-python -m scripts.chat_eval -i mid -a ARC-Easy
-torchrun --nproc_per_node=8 -m scripts.chat_eval -- -i mid -a ARC-Easy
+python -m scripts.chat_eval -a ARC-Easy
+torchrun --nproc_per_node=8 -m scripts.chat_eval -- -a ARC-Easy
 """
 
 import argparse
diff --git a/scripts/chat_sft.py b/scripts/chat_sft.py
index c0471c4..91300b6 100644
--- a/scripts/chat_sft.py
+++ b/scripts/chat_sft.py
@@ -1,65 +1,63 @@
 """
-Finetune a base model to be a chat model.
-Run on one GPU e.g. for debugging:
+Supervised fine-tuning (SFT) the model.
+Run as:
 
 python -m scripts.chat_sft
 
 Or torchrun for training:
-torchrun --standalone --nproc_per_node=8 -m scripts.chat_sft
+torchrun --standalone --nproc_per_node=8 -m scripts.chat_sft -- --device-batch-size=16
 """
 import argparse
 import os
 os.environ["PYTORCH_ALLOC_CONF"] = "expandable_segments:True"
-
+import time
 import wandb
 import torch
-import torch.distributed as dist
 from contextlib import nullcontext
-
-from nanochat.common import compute_init, compute_cleanup, get_base_dir, print0, DummyWandb, autodetect_device_type
-from nanochat.checkpoint_manager import load_model
+from nanochat.common import compute_init, compute_cleanup, print0, DummyWandb, get_base_dir, autodetect_device_type
+from nanochat.tokenizer import get_token_bytes
 from nanochat.checkpoint_manager import save_checkpoint
-from nanochat.engine import Engine
-from scripts.chat_eval import run_chat_eval
+from nanochat.loss_eval import evaluate_bpb
+from nanochat.checkpoint_manager import load_model
+import torch.distributed as dist
 from tasks.common import TaskMixture
-from tasks.arc import ARC
 from tasks.gsm8k import GSM8K
+from tasks.mmlu import MMLU
 from tasks.smoltalk import SmolTalk
 from tasks.customjson import CustomJSON
 from tasks.spellingbee import SimpleSpelling, SpellingBee
 
 # -----------------------------------------------------------------------------
 # CLI arguments
-parser = argparse.ArgumentParser(description="Supervised finetuning for chat")
+parser = argparse.ArgumentParser(description="Supervised fine-tuning (SFT) the model")
 # Logging
 parser.add_argument("--run", type=str, default="dummy", help="wandb run name ('dummy' disables wandb logging)")
 # Runtime
 parser.add_argument("--device-type", type=str, default="", help="cuda|cpu|mps (empty = autodetect)")
 parser.add_argument("--dtype", type=str, default="bfloat16", help="float32|bfloat16")
 # Model loading
-parser.add_argument("--source", type=str, default="mid", help="base|mid - which checkpoint to load from")
 parser.add_argument("--model-tag", type=str, default=None, help="model tag to load from")
 parser.add_argument("--model-step", type=int, default=None, help="model step to load from")
 # Training horizon
-parser.add_argument("--num-epochs", type=int, default=1, help="number of epochs")
-parser.add_argument("--num-iterations", type=int, default=-1, help="override number of iterations (-1 = use num_epochs)")
+parser.add_argument("--num-iterations", type=int, default=-1, help="number of optimization steps (-1 = full epoch)")
 # Batch sizes
-parser.add_argument("--device-batch-size", type=int, default=4, help="per-device batch size")
-parser.add_argument("--target-examples-per-step", type=int, default=32, help="target examples per optimization step")
+parser.add_argument("--max-seq-len", type=int, default=2048, help="max context length")
+parser.add_argument("--device-batch-size", type=int, default=32, help="per-device batch size")
+parser.add_argument("--total-batch-size", type=int, default=524288, help="total batch size in tokens")
 # Optimization
 parser.add_argument("--embedding-lr", type=float, default=0.2, help="learning rate for embedding parameters (Adam)")
 parser.add_argument("--unembedding-lr", type=float, default=0.004, help="learning rate for unembedding parameters (Adam)")
 parser.add_argument("--matrix-lr", type=float, default=0.02, help="learning rate for matrix parameters (Muon)")
 parser.add_argument("--weight-decay", type=float, default=0.0, help="weight decay for embedding/unembedding parameters (Adam)")
-parser.add_argument("--init-lr-frac", type=float, default=0.02, help="initial LR as fraction of base LR")
+parser.add_argument("--init-lr-frac", type=float, default=1.0, help="initial LR as fraction of base LR")
 # Evaluation
-parser.add_argument("--eval-every", type=int, default=100, help="evaluate val loss every N steps")
-parser.add_argument("--eval-steps", type=int, default=100, help="number of batches for val loss evaluation")
-parser.add_argument("--eval-metrics-every", type=int, default=200, help="evaluate accuracy metrics every N steps")
-parser.add_argument("--eval-metrics-max-problems", type=int, default=1024, help="max problems per metric evaluation")
+parser.add_argument("--eval-every", type=int, default=150, help="evaluate val bpb every N steps (-1 = disable)")
+parser.add_argument("--eval-tokens", type=int, default=20*524288, help="number of tokens to evaluate val loss on")
+# Output
+parser.add_argument("--dry-run", action="store_true", help="log to wandb but skip checkpoints/report")
 args = parser.parse_args()
 user_config = vars(args).copy()
 # -----------------------------------------------------------------------------
@@ -70,217 +68,320 @@ ddp, ddp_rank, ddp_local_rank, ddp_world_size, device = compute_init(device_type
 master_process = ddp_rank == 0
 ptdtype = torch.float32 if args.dtype == 'float32' else torch.bfloat16
 autocast_ctx = torch.amp.autocast(device_type=device_type, dtype=ptdtype) if device_type == "cuda" else nullcontext()
+synchronize = torch.cuda.synchronize if device_type == "cuda" else lambda: None
+get_max_memory = torch.cuda.max_memory_allocated if device_type == "cuda" else lambda: 0
 
 # wandb logging init
 use_dummy_wandb = args.run == "dummy" or not master_process
-wandb_run = DummyWandb() if use_dummy_wandb else wandb.init(project="nanochat-sft", name=args.run, config=user_config, save_code=True)
+wandb_run = DummyWandb() if use_dummy_wandb else wandb.init(project="nanochat-sft", name=args.run, config=user_config)
 
 # Load the model and tokenizer
-model, tokenizer, meta = load_model(args.source, device, phase="train", model_tag=args.model_tag, step=args.model_step)
-orig_model = model # original, uncompiled model
-# model = torch.compile(model, dynamic=True) # doesn't work super well because of variable lengths of inputs
-engine = Engine(model, tokenizer) # will be used for inline model evaluation only
+model, tokenizer, meta =
load_model("base", device, phase="train", model_tag=args.model_tag, step=args.model_step) +pretrain_batch_size = meta.get("device_batch_size", None) +if pretrain_batch_size is not None and args.device_batch_size > pretrain_batch_size: + print0(f"FOOTGUN WARNING: base model training used device_batch_size {pretrain_batch_size}, did you pass in a good --device-batch-size to this script?") +orig_model = model +model = torch.compile(model, dynamic=False) +depth = model.config.n_layer +num_flops_per_token = model.estimate_flops() +tokens_per_fwdbwd = args.device_batch_size * args.max_seq_len # tokens per iteration for a single rank +world_tokens_per_fwdbwd = tokens_per_fwdbwd * ddp_world_size # total tokens per iteration for all ranks +assert args.total_batch_size % world_tokens_per_fwdbwd == 0 +grad_accum_steps = args.total_batch_size // world_tokens_per_fwdbwd +print0(f"Tokens / micro-batch / rank: {args.device_batch_size} x {args.max_seq_len} = {tokens_per_fwdbwd:,}") +print0(f"Tokens / micro-batch: {world_tokens_per_fwdbwd:,}") +print0(f"Total batch size {args.total_batch_size:,} => gradient accumulation steps: {grad_accum_steps}") +token_bytes = get_token_bytes(device=device) -# ----------------------------------------------------------------------------- -# Task data mixture we'll train on -identity_conversations_filepath = os.path.join(get_base_dir(), "identity_conversations.jsonl") -train_ds = TaskMixture([ - ARC(subset="ARC-Easy", split="train"), # 2.3K rows - ARC(subset="ARC-Challenge", split="train"), # 1.1K rows - GSM8K(subset="main", split="train"), # 8K rows - SmolTalk(split="train", stop=10_000), # 10K rows of smoltalk - CustomJSON(filepath=identity_conversations_filepath), # 1K rows of synthetic identity conversations - SimpleSpelling(size=300, split="train"), # 300 rows of Simple Spelling (e.g. spell the word 'apple') - SpellingBee(size=300, split="train"), # 300 rows of Spelling Bee (e.g. how many 'r' are in 'strawberry'?) 
-]) # 2.3K + 1.1K + 8K + 10K + 1K + 0.3K + 0.3K = 23K rows -val_ds = SmolTalk(split="test") # general conversations, 24K rows (though we don't actually use all of it) - -# ----------------------------------------------------------------------------- -# DataLoader - -def sft_data_generator(dataset, batch_size): - pad_token_id = tokenizer.encode_special("<|assistant_end|>") # use <|assistant_end|> as the pad token is ok, these positions are masked in the loss - # prepares a list of tokenized conversations into a batch and yields - def collate_and_yield(batch): - nrows = len(batch) - ncols = max(len(ids) for ids, mask in batch) - 1 # seq of n creates inputs/targets of n-1 - inputs = torch.full((nrows, ncols), pad_token_id, dtype=torch.long) - targets = torch.full((nrows, ncols), -1, dtype=torch.long) # -1 is ignore index - for i, (ids, mask) in enumerate(batch): - n = len(ids) - ids_tensor = torch.tensor(ids, dtype=torch.long) - inputs[i, :n-1] = ids_tensor[:-1] - # recall -1 is the ignore index, so mask out targets where mask is 0 - row_targets = ids_tensor[1:] - # mask[1:] omits the mask for the BOS token, which is never a target atm so it's ok - mask_tensor = torch.tensor(mask[1:], dtype=torch.long) - row_targets[mask_tensor == 0] = -1 # mask out targets where mask is 0 - targets[i, :n-1] = row_targets - inputs = inputs.to(device) # move to device - targets = targets.to(device) - return inputs, targets - # iterates over the dataset in epochs, tokenizes - batch = [] - while True: - for i in range(ddp_rank, len(dataset), ddp_world_size): - doc = dataset[i] - ids, mask = tokenizer.render_conversation(doc) - batch.append((ids, mask)) - if len(batch) == batch_size: - yield collate_and_yield(batch) - batch = [] - -examples_per_step = args.device_batch_size * ddp_world_size -print0(f"Target examples per step: {args.target_examples_per_step}") -print0(f"Device batch size: {args.device_batch_size}") -print0(f"Examples per step is device_batch_size * ddp_world_size: 
{examples_per_step}") -assert args.target_examples_per_step % examples_per_step == 0, "Target examples per step must be divisible by examples per step" -grad_accum_steps = args.target_examples_per_step // examples_per_step -print0(f"=> Setting grad accum steps: {grad_accum_steps}") - -if args.num_iterations == -1: - # derive num_iterations from num_epochs and the size of the dataset - assert args.num_epochs > 0, "num_epochs must be positive if num_iterations is -1" - num_iterations = (len(train_ds) // args.target_examples_per_step) * args.num_epochs -else: - num_iterations = args.num_iterations -train_loader = sft_data_generator(train_ds, batch_size=args.device_batch_size) -build_val_loader = lambda: sft_data_generator(val_ds, batch_size=args.device_batch_size) - -# ----------------------------------------------------------------------------- -# Initialize the Optimizer - -optimizer = model.setup_optimizer( - unembedding_lr=args.unembedding_lr, - embedding_lr=args.embedding_lr, - matrix_lr=args.matrix_lr, - weight_decay=args.weight_decay, -) -# Set the initial learning rate as a fraction of the base learning rate +# Initialize the Optimizer (combined MuonAdamW: Muon for matrix params, AdamW for rest) +optimizer = model.setup_optimizer(unembedding_lr=args.unembedding_lr, embedding_lr=args.embedding_lr, matrix_lr=args.matrix_lr, weight_decay=args.weight_decay) +# Override the initial learning rate as a fraction of the base learning rate for group in optimizer.param_groups: group["lr"] = group["lr"] * args.init_lr_frac group["initial_lr"] = group["lr"] -# ----------------------------------------------------------------------------- -# Training loop +# SFT data mixture and DataLoader +base_dir = get_base_dir() +identity_conversations_filepath = os.path.join(base_dir, "identity_conversations.jsonl") +train_dataset = TaskMixture([ + SmolTalk(split="train"), # 460K rows of general conversations + MMLU(subset="auxiliary_train", split="train"), # 100K rows of multiple 
choice problems drawn from ARC, MC_TEST, OBQA, RACE
+    GSM8K(subset="main", split="train"), # 8K rows teaching simple math and (calculator) tool use
+    GSM8K(subset="main", split="train"), # 2 epochs of GSM8K
+    CustomJSON(filepath=identity_conversations_filepath), # 1000 rows of synthetic identity conversations
+    CustomJSON(filepath=identity_conversations_filepath), # let's do 2 epochs of these
+    SimpleSpelling(size=200000, split="train"), # 200K rows of Simple Spelling (e.g. spell the word 'apple')
+    SpellingBee(size=80000, split="train"), # 80K rows of Spelling Bee (e.g. how many 'r' are in 'strawberry'?)
+]) # total: 460K + 100K + 16K + 2K + 200K + 80K = 858K rows
+val_dataset = TaskMixture([
+    SmolTalk(split="test"), # 24K rows in test set
+    MMLU(subset="all", split="test", stop=5200), # 14K rows in test set, use only 5.2K to match the train ratios
+    GSM8K(subset="main", split="test", stop=420), # 1.32K rows in test set, use only 420 to match the train ratios
+]) # total after the stops: 24K + 5.2K + 0.42K ~= 29.6K rows
+# DataLoader is defined here, it emits inputs, targets : 2D tensors of shape (device_batch_size, max_seq_len)
+# A big problem is that we don't know the final num_iterations in advance. So we create
+# these global variables and update them from within the data generator.
+last_step = False # we will toggle this to True when we reach the end of the training dataset
+approx_progress = 0.0 # will go from 0 to 1 over the course of the epoch
+current_epoch = 1 # track epoch for logging
+def sft_data_generator_bos_bestfit(split, buffer_size=100):
+    """
+    BOS-aligned dataloader for SFT with bestfit-pad packing.
+
+    Each row in the batch starts with BOS (the beginning of a conversation).
+    Conversations are packed using a best-fit algorithm. When no conversation fits,
+    the row is padded (instead of cropped) to ensure no tokens are ever discarded.
+    Padding positions have targets masked with -1 (ignore_index for cross-entropy).
+ """ + global last_step, approx_progress, current_epoch + assert split in {"train", "val"}, "split must be 'train' or 'val'" + dataset = train_dataset if split == "train" else val_dataset + dataset_size = len(dataset) + assert dataset_size > 0 + row_capacity = args.max_seq_len + 1 # +1 for target at last position + bos_token = tokenizer.get_bos_token_id() + + # Conversation buffer: list of token lists + conv_buffer = [] + cursor = ddp_rank # Each rank processes different conversations (for fetching) + consumed = ddp_rank # Track actual consumption separately from buffering + epoch = 1 + it = 0 # iteration counter + + def refill_buffer(): + nonlocal cursor, epoch + while len(conv_buffer) < buffer_size: + conversation = dataset[cursor] + ids, _ = tokenizer.render_conversation(conversation) + conv_buffer.append(ids) + cursor += ddp_world_size + if cursor >= dataset_size: + cursor = cursor % dataset_size + epoch += 1 + # Note: last_step is now triggered based on consumption, not fetching + + while True: + rows = [] + row_lengths = [] # Track actual content length (excluding padding) for each row + for _ in range(args.device_batch_size): + row = [] + padded = False + while len(row) < row_capacity: + # Ensure buffer has conversations + while len(conv_buffer) < buffer_size: + refill_buffer() + + remaining = row_capacity - len(row) + + # Find largest conversation that fits entirely + best_idx = -1 + best_len = 0 + for i, conv in enumerate(conv_buffer): + conv_len = len(conv) + if conv_len <= remaining and conv_len > best_len: + best_idx = i + best_len = conv_len + + if best_idx >= 0: + # Found a conversation that fits - use it entirely + conv = conv_buffer.pop(best_idx) + row.extend(conv) + consumed += ddp_world_size # Track actual consumption + else: + # No conversation fits - pad the remainder instead of cropping + # This ensures we never discard any tokens + content_len = len(row) + row.extend([bos_token] * remaining) # Pad with BOS tokens + padded = True + break # Row 
is now full (with padding) + + # Track content length: full row if no padding, otherwise the length before padding + if padded: + row_lengths.append(content_len) + else: + row_lengths.append(row_capacity) + rows.append(row[:row_capacity]) + + # Stopping condition to respect num_iterations, if given + it += 1 + if 0 < args.num_iterations <= it and split == "train": + last_step = True + + # Update progress tracking (based on consumed, not cursor, to account for buffering) + if split == "train": + current_epoch = epoch + if args.num_iterations > 0: + approx_progress = it / args.num_iterations + else: + approx_progress = consumed / dataset_size + # Trigger last_step when we've consumed enough (instead of when cursor wraps) + if consumed >= dataset_size: + last_step = True + + # Build tensors + use_cuda = device_type == "cuda" + batch_tensor = torch.tensor(rows, dtype=torch.long, pin_memory=use_cuda) + inputs = batch_tensor[:, :-1].to(device=device, dtype=torch.int32, non_blocking=use_cuda) + targets = batch_tensor[:, 1:].to(device=device, dtype=torch.int64, non_blocking=use_cuda) + + # Mask out padding positions in targets (set to -1 = ignore_index) + # For each row, positions >= (content_length - 1) in targets should be masked + for i, content_len in enumerate(row_lengths): + if content_len < row_capacity: + targets[i, content_len-1:] = -1 + + yield inputs, targets + +train_loader = sft_data_generator_bos_bestfit("train") +build_val_loader = lambda: sft_data_generator_bos_bestfit("val") +progress = 0 # will go from 0 to 1 over the course of the epoch # Learning rate scheduler -def get_lr_multiplier(it): - lrm = 1.0 - it / num_iterations - return lrm +def get_lr_multiplier(progress): + # first 80% of training: no decay, then linearly ramp down to 0. + return 1 if progress < 0.8 else 1 - (progress - 0.8) / 0.2 -# Go! 
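The replacement schedule above is warmup-free and trapezoidal: the multiplier stays at 1 for the first 80% of `progress`, then ramps linearly to zero. Because it is driven by `progress` rather than by a precomputed iteration count, it still works when `num_iterations` isn't known up front. A self-contained copy for a quick sanity check:

```python
def get_lr_multiplier(progress):
    # first 80% of training: no decay, then linearly ramp down to 0.
    return 1 if progress < 0.8 else 1 - (progress - 0.8) / 0.2

# spot-check the flat region, the kink, and the endpoints
assert get_lr_multiplier(0.0) == 1
assert get_lr_multiplier(0.79) == 1
assert abs(get_lr_multiplier(0.9) - 0.5) < 1e-9
assert abs(get_lr_multiplier(1.0)) < 1e-9
```

In the training loop this multiplier is applied on top of each param group's `initial_lr`, so the per-group base learning rates keep their relative ratios throughout the decay.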
+# Momentum scheduler for Muon optimizer +def get_muon_momentum(it): + frac = min(it / 300, 1) + momentum = (1 - frac) * 0.85 + frac * 0.95 + return momentum + +# ----------------------------------------------------------------------------- +# Training loop +x, y = next(train_loader) # prefetch the very first batch of data +min_val_bpb = float("inf") +smooth_train_loss = 0 # EMA of training loss +ema_beta = 0.9 # EMA decay factor +total_training_time = 0 # total wall-clock time of training step = 0 -for step in range(num_iterations): - last_step = step == num_iterations - 1 +while True: + flops_so_far = num_flops_per_token * args.total_batch_size * step - # evaluate the validation loss - if last_step or step % args.eval_every == 0: + # Synchronize last_step across all ranks to avoid hangs in the distributed setting + if ddp: + last_step_tensor = torch.tensor(last_step, dtype=torch.int32, device=device) + dist.all_reduce(last_step_tensor, op=dist.ReduceOp.MAX) + last_step = bool(last_step_tensor.item()) + + # once in a while: evaluate the val bpb (all ranks participate) + if last_step or (args.eval_every > 0 and step % args.eval_every == 0): model.eval() val_loader = build_val_loader() - losses = [] - for _ in range(args.eval_steps): - val_inputs, val_targets = next(val_loader) - with torch.no_grad(), autocast_ctx: - loss = model(val_inputs, val_targets) - losses.append(loss) - val_loss = torch.stack(losses).mean() # average over eval_steps - if ddp: - dist.all_reduce(val_loss, op=dist.ReduceOp.AVG) # average over ranks - val_loss = val_loss.item() - print0(f"Step {step:05d} | Validation loss: {val_loss:.6f}") + eval_steps = args.eval_tokens // (args.device_batch_size * args.max_seq_len * ddp_world_size) + with autocast_ctx: + val_bpb = evaluate_bpb(model, val_loader, eval_steps, token_bytes) + print0(f"Step {step:05d} | Validation bpb: {val_bpb:.4f}") + if val_bpb < min_val_bpb: + min_val_bpb = val_bpb wandb_run.log({ "step": step, - "val_loss": val_loss, + 
"total_training_flops": flops_so_far, + "total_training_time": total_training_time, + "val/bpb": val_bpb, }) model.train() - # evaluate accuracy of the multiple choice tasks (which are quick to run) - if last_step or (step > 0 and step % args.eval_metrics_every == 0): - model.eval() - metrics = {} - with torch.no_grad(), autocast_ctx: - # note that because these are inside no_grad, we can usually afford to at least ~2X the batch size - metrics["mmlu_acc"] = run_chat_eval("MMLU", model, tokenizer, engine, batch_size=args.device_batch_size*2, max_problems=args.eval_metrics_max_problems) - metrics["arc_easy_acc"] = run_chat_eval("ARC-Easy", model, tokenizer, engine, batch_size=args.device_batch_size*2, max_problems=args.eval_metrics_max_problems) - metrics_str = ', '.join(f'{k}: {v:.6f}' for k, v in metrics.items()) - print0(f"Step {step:05d} | {metrics_str}") - wandb_run.log({ - "step": step, - **metrics, - }) - model.train() + # save checkpoint at the end of the run (only on master process) + if master_process and last_step and not args.dry_run: + output_dirname = args.model_tag if args.model_tag else f"d{depth}" # e.g. 
d12 + checkpoint_dir = os.path.join(base_dir, "sft_checkpoints", output_dirname) + save_checkpoint( + checkpoint_dir, + step, + orig_model.state_dict(), + optimizer.state_dict(), + { + "step": step, + "val_bpb": val_bpb, # loss at last step + "model_config": { + "sequence_len": args.max_seq_len, + "vocab_size": tokenizer.get_vocab_size(), + "n_layer": depth, + "n_head": model.config.n_head, + "n_kv_head": model.config.n_kv_head, + "n_embd": model.config.n_embd, + }, + "user_config": user_config, # inputs to the training script + } + ) if last_step: break + # ------------------------------------------------------------------------- + # single training step # evaluate the gradient - num_tokens = torch.tensor(0, device=device) # the number of "active" tokens of supervision seen + synchronize() + t0 = time.time() for micro_step in range(grad_accum_steps): - train_inputs, train_targets = next(train_loader) with autocast_ctx: - loss = model(train_inputs, train_targets) + loss = model(x, y) train_loss = loss.detach() # for logging loss = loss / grad_accum_steps # each .backward() is a grad sum => normalize loss here - loss.backward() # accumulate the gradient - num_tokens += (train_targets >= 0).sum() - if ddp: - dist.all_reduce(num_tokens, op=dist.ReduceOp.SUM) # sum over ranks - - # learning rate scheduler - lrm = get_lr_multiplier(step) + loss.backward() + x, y = next(train_loader) # prefetch the next batch while the GPU is busy with forward/backward + progress = max(progress, approx_progress) # only increase progress monotonically + # step the optimizer + lrm = get_lr_multiplier(progress) + muon_momentum = get_muon_momentum(step) for group in optimizer.param_groups: group["lr"] = group["initial_lr"] * lrm - - # step the optimizer + if group['kind'] == 'muon': + group["momentum"] = muon_momentum optimizer.step() model.zero_grad(set_to_none=True) + synchronize() + t1 = time.time() + dt = t1 - t0 + # 
------------------------------------------------------------------------- - # logging - train_loss_item = train_loss.item() - num_tokens_item = num_tokens.item() - print0(f"Step {step:05d}/{num_iterations:05d} | Training loss: {train_loss_item:.6f}| lrm: {lrm:.6f}| num_tokens: {num_tokens_item:,}") - wandb_run.log({ - "step": step, - "lrm": lrm, - "train_loss": train_loss_item, - "num_tokens": num_tokens_item, - }) + # State step += 1 -# Save the model at the end of the run -if master_process: - base_dir = get_base_dir() - depth = model.config.n_layer - output_dirname = args.model_tag if args.model_tag else f"d{depth}" # e.g. d12 - checkpoint_dir = os.path.join(base_dir, "chatsft_checkpoints", output_dirname) - model_config_kwargs = model.config.__dict__ # slightly naughty, abusing the simplicity of GPTConfig, TODO nicer - save_checkpoint( - checkpoint_dir, - step, - model.state_dict(), - None, # note: we don't bother to save the optimizer state - { + # logging + smooth_train_loss = ema_beta * smooth_train_loss + (1 - ema_beta) * train_loss.item() # EMA the training loss + debiased_smooth_loss = smooth_train_loss / (1 - ema_beta**(step + 1)) # debias the EMA + pct_done = 100 * progress + tok_per_sec = int(args.total_batch_size / dt) + flops_per_sec = num_flops_per_token * args.total_batch_size / dt + promised_flops_per_sec_h100 = 989e12 * ddp_world_size # bfloat16 H100 SXM and without 2:4 sparsity + mfu = 100 * flops_per_sec / promised_flops_per_sec_h100 # in % + if step > 10: + total_training_time += dt # only count the time after the first 10 steps + print0(f"step {step:05d} ({pct_done:.2f}%) | loss: {debiased_smooth_loss:.6f} | lrm: {lrm:.2f} | dt: {dt * 1000:.2f}ms | tok/sec: {tok_per_sec:,} | mfu: {mfu:.2f} | epoch: {current_epoch} | total time: {total_training_time/60:.2f}m") + if step % 10 == 0: + wandb_run.log({ "step": step, - "val_loss": val_loss, - **metrics, - "model_config": model_config_kwargs, - } - ) - print(f"✅ Saved model checkpoint to 
{checkpoint_dir}") + "total_training_flops": flops_so_far, + "total_training_time": total_training_time, + "train/loss": debiased_smooth_loss, + "train/lrm": lrm, + "train/dt": dt, + "train/tok_per_sec": tok_per_sec, + "train/mfu": mfu, + "train/epoch": current_epoch, + }) + +# print a few more stats +print0(f"Peak memory usage: {get_max_memory() / 1024 / 1024:.2f}MiB") +print0(f"Total training time: {total_training_time/60:.2f}m") +print0(f"Minimum validation bpb: {min_val_bpb:.4f}") # Log to report -from nanochat.report import get_report -get_report().log(section="Chat SFT", data=[ - user_config, # CLI args - { - "Training rows": len(train_ds), - "Number of iterations": num_iterations, - "Training loss": train_loss_item, - "Validation loss": val_loss, - }, -]) +if not args.dry_run: + from nanochat.report import get_report + get_report().log(section="SFT", data=[ + user_config, # CLI args + { # stats about the training setup + "Number of iterations": step, + "DDP world size": ddp_world_size, + }, + { # stats about training outcomes + "Minimum validation bpb": min_val_bpb, + } + ]) -# Cleanup -wandb_run.finish() +# cleanup +wandb_run.finish() # wandb run finish compute_cleanup() diff --git a/scripts/mid_train.py b/scripts/mid_train.py deleted file mode 100644 index 54c5fb0..0000000 --- a/scripts/mid_train.py +++ /dev/null @@ -1,386 +0,0 @@ -""" -Midtrain the model. Same as pretraining but simpler. 
-Run as: - -python -m scripts.mid_train - -Or torchrun for training: - -torchrun --standalone --nproc_per_node=8 -m scripts.mid_train -- --device-batch-size=16 -""" - -import argparse -import os -os.environ["PYTORCH_ALLOC_CONF"] = "expandable_segments:True" -import time -import wandb -import torch -from contextlib import nullcontext -from nanochat.common import compute_init, compute_cleanup, print0, DummyWandb, get_base_dir, autodetect_device_type -from nanochat.tokenizer import get_token_bytes -from nanochat.checkpoint_manager import save_checkpoint -from nanochat.loss_eval import evaluate_bpb -from nanochat.checkpoint_manager import load_model -import torch.distributed as dist - -from tasks.common import TaskMixture -from tasks.gsm8k import GSM8K -from tasks.mmlu import MMLU -from tasks.smoltalk import SmolTalk -from tasks.customjson import CustomJSON -from tasks.spellingbee import SimpleSpelling, SpellingBee - -# ----------------------------------------------------------------------------- -# CLI arguments -parser = argparse.ArgumentParser(description="Midtrain the model") -# Logging -parser.add_argument("--run", type=str, default="dummy", help="wandb run name ('dummy' disables wandb logging)") -# Runtime -parser.add_argument("--device-type", type=str, default="", help="cuda|cpu|mps (empty = autodetect)") -parser.add_argument("--dtype", type=str, default="bfloat16", help="float32|bfloat16") -# Model loading -parser.add_argument("--model-tag", type=str, default=None, help="model tag to load from") -parser.add_argument("--model-step", type=int, default=None, help="model step to load from") -# Training horizon -parser.add_argument("--num-iterations", type=int, default=-1, help="number of optimization steps (-1 = full epoch)") -# Batch sizes -parser.add_argument("--max-seq-len", type=int, default=2048, help="max context length") -parser.add_argument("--device-batch-size", type=int, default=32, help="per-device batch size") -parser.add_argument("--total-batch-size", 
type=int, default=524288, help="total batch size in tokens") -# Optimization -parser.add_argument("--embedding-lr", type=float, default=0.2, help="learning rate for embedding parameters (Adam)") -parser.add_argument("--unembedding-lr", type=float, default=0.004, help="learning rate for unembedding parameters (Adam)") -parser.add_argument("--matrix-lr", type=float, default=0.02, help="learning rate for matrix parameters (Muon)") -parser.add_argument("--weight-decay", type=float, default=0.0, help="weight decay for embedding/unembedding parameters (Adam)") -parser.add_argument("--init-lr-frac", type=float, default=1.0, help="initial LR as fraction of base LR") -# Evaluation -parser.add_argument("--eval-every", type=int, default=150, help="evaluate val bpb every N steps (-1 = disable)") -parser.add_argument("--eval-tokens", type=int, default=20*524288, help="number of tokens to evaluate val loss on") -# Output -parser.add_argument("--dry-run", action="store_true", help="log to wandb but skip checkpoints/report") -args = parser.parse_args() -user_config = vars(args).copy() -# ----------------------------------------------------------------------------- - -# Compute init -device_type = autodetect_device_type() if args.device_type == "" else args.device_type -ddp, ddp_rank, ddp_local_rank, ddp_world_size, device = compute_init(device_type) -master_process = ddp_rank == 0 -ptdtype = torch.float32 if args.dtype == 'float32' else torch.bfloat16 -autocast_ctx = torch.amp.autocast(device_type=device_type, dtype=ptdtype) if device_type == "cuda" else nullcontext() -synchronize = torch.cuda.synchronize if device_type == "cuda" else lambda: None -get_max_memory = torch.cuda.max_memory_allocated if device_type == "cuda" else lambda: 0 - -# wandb logging init -use_dummy_wandb = args.run == "dummy" or not master_process -wandb_run = DummyWandb() if use_dummy_wandb else wandb.init(project="nanochat-mid", name=args.run, config=user_config) - -# Load the model and tokenizer -model, 
tokenizer, meta = load_model("base", device, phase="train", model_tag=args.model_tag, step=args.model_step) -pretrain_batch_size = meta.get("device_batch_size", None) -if pretrain_batch_size is not None and args.device_batch_size > pretrain_batch_size: - print0(f"FOOTGUN WARNING: base model training used device_batch_size {pretrain_batch_size}, did you pass in a good --device-batch-size to this script?") -orig_model = model -model = torch.compile(model, dynamic=False) -depth = model.config.n_layer -num_flops_per_token = model.estimate_flops() -tokens_per_fwdbwd = args.device_batch_size * args.max_seq_len # tokens per iteration for a single rank -world_tokens_per_fwdbwd = tokens_per_fwdbwd * ddp_world_size # total tokens per iteration for all ranks -assert args.total_batch_size % world_tokens_per_fwdbwd == 0 -grad_accum_steps = args.total_batch_size // world_tokens_per_fwdbwd -print0(f"Tokens / micro-batch / rank: {args.device_batch_size} x {args.max_seq_len} = {tokens_per_fwdbwd:,}") -print0(f"Tokens / micro-batch: {world_tokens_per_fwdbwd:,}") -print0(f"Total batch size {args.total_batch_size:,} => gradient accumulation steps: {grad_accum_steps}") -token_bytes = get_token_bytes(device=device) - -# Initialize the Optimizer (combined MuonAdamW: Muon for matrix params, AdamW for rest) -optimizer = model.setup_optimizer(unembedding_lr=args.unembedding_lr, embedding_lr=args.embedding_lr, matrix_lr=args.matrix_lr, weight_decay=args.weight_decay) -# Override the initial learning rate as a fraction of the base learning rate -for group in optimizer.param_groups: - group["lr"] = group["lr"] * args.init_lr_frac - group["initial_lr"] = group["lr"] - -# Midtraining data mixture and DataLoader -base_dir = get_base_dir() -identity_conversations_filepath = os.path.join(base_dir, "identity_conversations.jsonl") -train_dataset = TaskMixture([ - SmolTalk(split="train"), # 460K rows of general conversations - MMLU(subset="auxiliary_train", split="train"), # 100K rows of multiple 
choice problems drawn from ARC, MC_TEST, OBQA, RACE - GSM8K(subset="main", split="train"), # 8K rows teaching simple math and (calculator) tool use - CustomJSON(filepath=identity_conversations_filepath), # 1000 rows of synthetic identity conversations - CustomJSON(filepath=identity_conversations_filepath), # let's do 2 epochs of these - SimpleSpelling(size=200000, split="train"), # 200K rows of Simple Spelling (e.g. spell the word 'apple') - SpellingBee(size=80000, split="train"), # 80K rows of Spelling Bee (e.g. how many 'r' are in 'strawberry'?) -]) # total: 460K + 100K + 8K + 200K + 80K = 848K rows -val_dataset = TaskMixture([ - SmolTalk(split="test"), # 24K rows in test set - MMLU(subset="all", split="test", stop=5200), # 14K rows in test set, use only 5.2K to match the train ratios - GSM8K(subset="main", split="test", stop=420), # 1.32K rows in test set, use only 420 to match the train ratios -]) # total: 24K + 14K + 1.32K ~= 39K rows -# DataLoader is defined here, it emits inputs, targets : 2D tensors of shape (device_batch_size, max_seq_len) -# A big problem is that we don't know the final num_iterations in advance. So we create -# these two global variables and update them from within the data generator. -last_step = False # we will toggle this to True when we reach the end of the training dataset -approx_progress = 0.0 # will go from 0 to 1 over the course of the epoch -current_epoch = 1 # track epoch for logging -def mid_data_generator_bos_bestfit(split, buffer_size=100): - """ - BOS-aligned dataloader for midtraining with bestfit-pad packing. - - Each row in the batch starts with BOS (beginning of a conversation). - Conversations are packed using best-fit algorithm. When no conversation fits, - the row is padded (instead of cropping) to ensure no tokens are ever discarded. - Padding positions have targets masked with -1 (ignore_index for cross-entropy). 
- """ - global last_step, approx_progress, current_epoch - assert split in {"train", "val"}, "split must be 'train' or 'val'" - dataset = train_dataset if split == "train" else val_dataset - dataset_size = len(dataset) - assert dataset_size > 0 - row_capacity = args.max_seq_len + 1 # +1 for target at last position - bos_token = tokenizer.get_bos_token_id() - - # Conversation buffer: list of token lists - conv_buffer = [] - cursor = ddp_rank # Each rank processes different conversations (for fetching) - consumed = ddp_rank # Track actual consumption separately from buffering - epoch = 1 - it = 0 # iteration counter - - def refill_buffer(): - nonlocal cursor, epoch - while len(conv_buffer) < buffer_size: - conversation = dataset[cursor] - ids, _ = tokenizer.render_conversation(conversation) - conv_buffer.append(ids) - cursor += ddp_world_size - if cursor >= dataset_size: - cursor = cursor % dataset_size - epoch += 1 - # Note: last_step is now triggered based on consumption, not fetching - - while True: - rows = [] - row_lengths = [] # Track actual content length (excluding padding) for each row - for _ in range(args.device_batch_size): - row = [] - padded = False - while len(row) < row_capacity: - # Ensure buffer has conversations - while len(conv_buffer) < buffer_size: - refill_buffer() - - remaining = row_capacity - len(row) - - # Find largest conversation that fits entirely - best_idx = -1 - best_len = 0 - for i, conv in enumerate(conv_buffer): - conv_len = len(conv) - if conv_len <= remaining and conv_len > best_len: - best_idx = i - best_len = conv_len - - if best_idx >= 0: - # Found a conversation that fits - use it entirely - conv = conv_buffer.pop(best_idx) - row.extend(conv) - consumed += ddp_world_size # Track actual consumption - else: - # No conversation fits - pad the remainder instead of cropping - # This ensures we never discard any tokens - content_len = len(row) - row.extend([bos_token] * remaining) # Pad with BOS tokens - padded = True - break # Row 
is now full (with padding) - - # Track content length: full row if no padding, otherwise the length before padding - if padded: - row_lengths.append(content_len) - else: - row_lengths.append(row_capacity) - rows.append(row[:row_capacity]) - - # Stopping condition to respect num_iterations, if given - it += 1 - if 0 < args.num_iterations <= it and split == "train": - last_step = True - - # Update progress tracking (based on consumed, not cursor, to account for buffering) - if split == "train": - current_epoch = epoch - if args.num_iterations > 0: - approx_progress = it / args.num_iterations - else: - approx_progress = consumed / dataset_size - # Trigger last_step when we've consumed enough (instead of when cursor wraps) - if consumed >= dataset_size: - last_step = True - - # Build tensors - use_cuda = device_type == "cuda" - batch_tensor = torch.tensor(rows, dtype=torch.long, pin_memory=use_cuda) - inputs = batch_tensor[:, :-1].to(device=device, dtype=torch.int32, non_blocking=use_cuda) - targets = batch_tensor[:, 1:].to(device=device, dtype=torch.int64, non_blocking=use_cuda) - - # Mask out padding positions in targets (set to -1 = ignore_index) - # For each row, positions >= (content_length - 1) in targets should be masked - for i, content_len in enumerate(row_lengths): - if content_len < row_capacity: - targets[i, content_len-1:] = -1 - - yield inputs, targets - -train_loader = mid_data_generator_bos_bestfit("train") -build_val_loader = lambda: mid_data_generator_bos_bestfit("val") -progress = 0 # will go from 0 to 1 over the course of the epoch - -# Learning rate scheduler -def get_lr_multiplier(progress): - # first 80% of training: no decay, then linearly ramp down to 0. 
- return 1 if progress < 0.8 else 1 - (progress - 0.8) / 0.2 - -# Momentum scheduler for Muon optimizer -def get_muon_momentum(it): - frac = min(it / 300, 1) - momentum = (1 - frac) * 0.85 + frac * 0.95 - return momentum - -# ----------------------------------------------------------------------------- -# Training loop -x, y = next(train_loader) # prefetch the very first batch of data -min_val_bpb = float("inf") -smooth_train_loss = 0 # EMA of training loss -ema_beta = 0.9 # EMA decay factor -total_training_time = 0 # total wall-clock time of training -step = 0 -while True: - flops_so_far = num_flops_per_token * args.total_batch_size * step - - # Synchronize last_step across all ranks to avoid hangs in the distributed setting - if ddp: - last_step_tensor = torch.tensor(last_step, dtype=torch.int32, device=device) - dist.all_reduce(last_step_tensor, op=dist.ReduceOp.MAX) - last_step = bool(last_step_tensor.item()) - - # once in a while: evaluate the val bpb (all ranks participate) - if last_step or (args.eval_every > 0 and step % args.eval_every == 0): - model.eval() - val_loader = build_val_loader() - eval_steps = args.eval_tokens // (args.device_batch_size * args.max_seq_len * ddp_world_size) - with autocast_ctx: - val_bpb = evaluate_bpb(model, val_loader, eval_steps, token_bytes) - print0(f"Step {step:05d} | Validation bpb: {val_bpb:.4f}") - if val_bpb < min_val_bpb: - min_val_bpb = val_bpb - wandb_run.log({ - "step": step, - "total_training_flops": flops_so_far, - "total_training_time": total_training_time, - "val/bpb": val_bpb, - }) - model.train() - - # save checkpoint at the end of the run (only on master process) - if master_process and last_step and not args.dry_run: - output_dirname = args.model_tag if args.model_tag else f"d{depth}" # e.g. 
d12 - checkpoint_dir = os.path.join(base_dir, "mid_checkpoints", output_dirname) - save_checkpoint( - checkpoint_dir, - step, - orig_model.state_dict(), - optimizer.state_dict(), - { - "step": step, - "val_bpb": val_bpb, # loss at last step - "model_config": { - "sequence_len": args.max_seq_len, - "vocab_size": tokenizer.get_vocab_size(), - "n_layer": depth, - "n_head": model.config.n_head, - "n_kv_head": model.config.n_kv_head, - "n_embd": model.config.n_embd, - }, - "user_config": user_config, # inputs to the training script - } - ) - - if last_step: - break - - # ------------------------------------------------------------------------- - # single training step - # evaluate the gradient - synchronize() - t0 = time.time() - for micro_step in range(grad_accum_steps): - with autocast_ctx: - loss = model(x, y) - train_loss = loss.detach() # for logging - loss = loss / grad_accum_steps # each .backward() is a grad sum => normalize loss here - loss.backward() - x, y = next(train_loader) # prefetch the next batch while the GPU is busy with forward/backward - progress = max(progress, approx_progress) # only increase progress monotonically - # step the optimizer - lrm = get_lr_multiplier(progress) - muon_momentum = get_muon_momentum(step) - for group in optimizer.param_groups: - group["lr"] = group["initial_lr"] * lrm - if group['kind'] == 'muon': - group["momentum"] = muon_momentum - optimizer.step() - model.zero_grad(set_to_none=True) - synchronize() - t1 = time.time() - dt = t1 - t0 - # ------------------------------------------------------------------------- - - # State - step += 1 - - # logging - smooth_train_loss = ema_beta * smooth_train_loss + (1 - ema_beta) * train_loss.item() # EMA the training loss - debiased_smooth_loss = smooth_train_loss / (1 - ema_beta**(step + 1)) # debias the EMA - pct_done = 100 * progress - tok_per_sec = int(args.total_batch_size / dt) - flops_per_sec = num_flops_per_token * args.total_batch_size / dt - promised_flops_per_sec_h100 = 
989e12 * ddp_world_size # bfloat16 H100 SXM and without 2:4 sparsity - mfu = 100 * flops_per_sec / promised_flops_per_sec_h100 # in % - if step > 10: - total_training_time += dt # only count the time after the first 10 steps - print0(f"step {step:05d} ({pct_done:.2f}%) | loss: {debiased_smooth_loss:.6f} | lrm: {lrm:.2f} | dt: {dt * 1000:.2f}ms | tok/sec: {tok_per_sec:,} | mfu: {mfu:.2f} | epoch: {current_epoch} | total time: {total_training_time/60:.2f}m") - if step % 10 == 0: - wandb_run.log({ - "step": step, - "total_training_flops": flops_so_far, - "total_training_time": total_training_time, - "train/loss": debiased_smooth_loss, - "train/lrm": lrm, - "train/dt": dt, - "train/tok_per_sec": tok_per_sec, - "train/mfu": mfu, - "train/epoch": current_epoch, - }) - -# print a few more stats -print0(f"Peak memory usage: {get_max_memory() / 1024 / 1024:.2f}MiB") -print0(f"Total training time: {total_training_time/60:.2f}m") -print0(f"Minimum validation bpb: {min_val_bpb:.4f}") - -# Log to report -if not args.dry_run: - from nanochat.report import get_report - get_report().log(section="Midtraining", data=[ - user_config, # CLI args - { # stats about the training setup - "Number of iterations": step, - "DDP world size": ddp_world_size, - }, - { # stats about training outcomes - "Minimum validation bpb": min_val_bpb, - } - ]) - -# cleanup -wandb_run.finish() # wandb run finish -compute_cleanup()
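For reference, the bestfit-pad packing at the core of this patch (shared by the new SFT dataloader and the deleted midtraining one) can be exercised in isolation. Below is a simplified sketch under stated assumptions: plain Python lists instead of the tokenizer/DDP machinery, and hypothetical names `pack_row` and `BOS` that do not appear in the repo. It implements the same greedy rule: repeatedly place the longest buffered conversation that still fits, and once nothing fits, pad the remainder with BOS rather than cropping.

```python
BOS = 0  # hypothetical BOS/pad token id for this sketch

def pack_row(conv_buffer, row_capacity):
    """Greedy best-fit: repeatedly place the longest buffered conversation
    that still fits, then pad the remainder with BOS instead of cropping.
    Returns (row, content_len), where content_len excludes the padding."""
    row = []
    while len(row) < row_capacity:
        remaining = row_capacity - len(row)
        # find the longest buffered conversation that fits entirely
        best_idx, best_len = -1, 0
        for i, conv in enumerate(conv_buffer):
            if best_len < len(conv) <= remaining:
                best_idx, best_len = i, len(conv)
        if best_idx < 0:
            # nothing fits: pad the rest of the row, remember the content length
            content_len = len(row)
            row.extend([BOS] * remaining)
            return row, content_len
        row.extend(conv_buffer.pop(best_idx))
    return row, row_capacity

buffer = [[1] * 6, [2] * 3, [3] * 9]
row, content_len = pack_row(buffer, row_capacity=10)
assert row == [3] * 9 + [BOS]            # the 9-token conversation fits best, 1 pad token
assert content_len == 9
assert buffer == [[1] * 6, [2] * 3]      # unused conversations stay buffered for later rows
```

In the training scripts each packed row of length `max_seq_len + 1` then yields `inputs = row[:-1]` and `targets = row[1:]`, with target positions from `content_len - 1` onward set to -1 so the padding never contributes to the loss.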