<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>NotionNext BLOG</title>
        <link>https://shuoli199909.com/</link>
        <description>A site generated by NotionNext</description>
        <lastBuildDate>Thu, 02 Nov 2023 02:48:15 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>zh-CN</language>
        <copyright>All rights reserved 2023, Shuo Li</copyright>
        <item>
            <title><![CDATA[Simulation of Cochlea-Implants]]></title>
            <link>https://shuoli199909.com/article/eth-cs-ex1</link>
            <guid>https://shuoli199909.com/article/eth-cs-ex1</guid>
            <pubDate>Fri, 24 Mar 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Simulation of Sensory Systems (Exercise 1: The Auditory System).]]></description>
            <content:encoded><![CDATA[<div id="container" class="mx-auto undefined"><main class="notion light-mode notion-page notion-block-7e56cfae9e4946e5924e29da4b546367"><div class="notion-viewport"></div><div class="notion-collection-page-properties"></div><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-02a6f6f87e934019a05079a54cfda2d0" data-id="02a6f6f87e934019a05079a54cfda2d0"><span><div id="02a6f6f87e934019a05079a54cfda2d0" class="notion-header-anchor"></div><a class="notion-hash-link" href="#02a6f6f87e934019a05079a54cfda2d0" title="Background"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Background</span></span></h2><div class="notion-text notion-block-224c2f28a51f4ebfa6e97899568374d8">The mechanics of our ear convert sound into vibrations of the basilar membrane. Here is how this works:</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-990179f0d07f41959184d323da9672b3"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:467px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F1983434f-e6a0-46f6-8395-2a40fc1acadc%2Fex1_1.gif?table=block&amp;id=990179f0-d07f-4195-9184-d323da9672b3" alt="Figure 1: The basilar membrane, located in the cochlea, is narrow and stiff at one end, leading to resonances at high frequencies."
loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 1:</b> The basilar membrane, located in the cochlea, is narrow and stiff at one end, leading to resonances at high frequencies.</figcaption></div></figure><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-4ccd71983cc14dc79ba8cda6128e795a"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:467px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F86b2dc9c-d21c-4697-b0ac-9bfa91a6f65e%2Fex1_2.gif?table=block&amp;id=4ccd7198-3cc1-4dc7-9ba8-cda6128e795a" alt="Figure 2: It is wide and loose at the other end, leading to resonances at low frequencies." loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 2:</b> It is wide and loose at the other end, leading to resonances at low frequencies.</figcaption></div></figure><div class="notion-text notion-block-756ccbf8e7664bbab40f3dd25acced4b">This means that our ear automatically performs an approximate Fourier Transformation on the incoming sound, separating the high frequency components from the low frequency components. 
This feature is used in the extremely successful design of cochlear implants (CIs): there, sound is separated into approximately 20 frequency bins, and 20 electrodes stimulate the corresponding auditory nerve cells:</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-dd41aa08c6414469bf18e754c32ebba8"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:384px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F30418d06-aaec-467d-a974-caa9e885f3af%2Fex1_3.jpg?table=block&amp;id=dd41aa08-c641-4469-bf18-e754c32ebba8" alt="Figure 3: Artist&#x27;s drawing of an inserted cochlear implant." loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 3:</b> Artist&#x27;s drawing of an inserted cochlear implant.</figcaption></div></figure><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-5dd1b566ac234224bc1ecb8598193ded" data-id="5dd1b566ac234224bc1ecb8598193ded"><span><div id="5dd1b566ac234224bc1ecb8598193ded" class="notion-header-anchor"></div><a class="notion-hash-link" href="#5dd1b566ac234224bc1ecb8598193ded" title="Limitation of CI-Electrodes"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title"><b>Limitation of CI-Electrodes</b></span></span></h3><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-2b193e8efb8849c58c302a449eb05e35"><div 
style="position:relative;display:flex;justify-content:center;align-self:center;width:480px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fa9eb5f12-c5ed-4f86-a761-7a12e962d9a0%2FUntitled.png?table=block&amp;id=2b193e8e-fb88-49c5-8c30-2a449eb05e35" alt="Figure 4: The number of electrodes for CIs is limited mainly by (1) current spread (green), and by (2) minimum electrode size required to avoid large current densities (red)." loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 4:</b> The number of electrodes for CIs is limited mainly by (1) <em>current spread</em> (green), and by (2) minimum electrode size required to avoid large <em>current densities</em> (red).</figcaption></div></figure><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-35c2e01a9462472a9fcbd873b904bafd" data-id="35c2e01a9462472a9fcbd873b904bafd"><span><div id="35c2e01a9462472a9fcbd873b904bafd" class="notion-header-anchor"></div><a class="notion-hash-link" href="#35c2e01a9462472a9fcbd873b904bafd" title="Possible Solutions: Time Domain or Frequency Domain"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title"><b>Possible Solutions: Time Domain or Frequency Domain</b></span></span></h3><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-652a3e948a704c3b9ab09ee5579a6243"><div 
style="position:relative;display:flex;justify-content:center;align-self:center;width:480px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fb6b4eed9-8ead-447f-adb1-b6a7f03a2a13%2FUntitled.png?table=block&amp;id=652a3e94-8a70-4c3b-9ab0-9ee5579a6243" alt="Figure 5: For each time window, the stimulation intensity of each electrode can be worked out either in the Time Domain or in the Frequency Domain." loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 5:</b> For each time window, the stimulation intensity of each electrode can be worked out either in the <em>Time Domain</em> or in the <em>Frequency Domain</em>.</figcaption></div></figure><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-c969345c4e6940fab7fd2f4f53bfd54b" data-id="c969345c4e6940fab7fd2f4f53bfd54b"><span><div id="c969345c4e6940fab7fd2f4f53bfd54b" class="notion-header-anchor"></div><a class="notion-hash-link" href="#c969345c4e6940fab7fd2f4f53bfd54b" title="(a) Time Domain: Linear Filters - Gamma Tones"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title"><b>(a) Time Domain: Linear Filters - Gamma Tones</b></span></span></h4><div class="notion-text notion-block-edd7ec1a823d41588b67edac611e2f5f">In fact, the basilar membrane does not behave like a Fourier Transform. To a first approximation, Gamma tones describe the frequency response of the basilar membrane quite well. 
They can be well -- and very efficiently -- approximated by <em>IIR filters</em>. Note the &quot;multi-resolution behavior&quot; of the gamma tones: high frequencies decay on a much faster (time)scale than the lower frequencies:</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-9b64bc0e706247b5be5ecda29cdf2381"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:480px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fa456c713-7c4b-43e7-ac16-66d09908ccf0%2FUntitled.png?table=block&amp;id=9b64bc0e-7062-47b5-be5e-cda29cdf2381" alt="Figure 6: Effect of a sound impulse at four different locations on the basilar membrane." loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 6:</b> Effect of a sound impulse at four different locations on the basilar membrane.</figcaption></div></figure><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-ea8ab0972970418fb71bc2944ba318b6" data-id="ea8ab0972970418fb71bc2944ba318b6"><span><div id="ea8ab0972970418fb71bc2944ba318b6" class="notion-header-anchor"></div><a class="notion-hash-link" href="#ea8ab0972970418fb71bc2944ba318b6" title="(b) Frequency Domain: Powerspectrum"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title"><b>(b) Frequency Domain: Powerspectrum</b></span></span></h4><div class="notion-text notion-block-6205d842507348bd8b2897a49830c9e7">One way to characterize the frequency 
dependence of a signal is to use a <em>Fourier Transformation (FFT)</em>.</div><ul class="notion-list notion-list-disc notion-block-94994215ed794bb284458ccadc6ddc00"><li>The equation for the frequency components of an FFT is <span role="button" tabindex="0" class="notion-equation notion-equation-inline"><span>f_n = n/(N·Ts)</span></span>, where &quot;N&quot; is the number of data-points, &quot;Ts&quot; the sampling-interval, and &quot;n&quot; the running index (1:N)</li></ul><ul class="notion-list notion-list-disc notion-block-6d02e8f6670c464281303df0f5674a34"><li>Frequencies on the basilar membrane are arranged approximately logarithmically. Take this fact into consideration.</li></ul><hr class="notion-hr notion-block-9b413372dc2a4df7841dc789bf032c83"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-71d113e78f7f44708cf62e9879269291" data-id="71d113e78f7f44708cf62e9879269291"><span><div id="71d113e78f7f44708cf62e9879269291" class="notion-header-anchor"></div><a class="notion-hash-link" href="#71d113e78f7f44708cf62e9879269291" title="Exercise Description: Simulation of Cochlea-Implants"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title"><b>Exercise Description: Simulation of Cochlea-Implants</b></span></span></h2><h4 class="notion-h notion-h3 notion-h-indent-1 notion-block-fdbefe46b7904bb1b19db0a7d8b57d5e" data-id="fdbefe46b7904bb1b19db0a7d8b57d5e"><span><div id="fdbefe46b7904bb1b19db0a7d8b57d5e" class="notion-header-anchor"></div><a class="notion-hash-link" href="#fdbefe46b7904bb1b19db0a7d8b57d5e" title="Write a function that simulates cochlea 
implants:"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Write a function that simulates cochlea implants:</span></span></h4><ul class="notion-list notion-list-disc notion-block-715799868cf0466dacc96c53e846399c"><li>Your function should allow the user to interactively (=graphically) select an audio-file.</li></ul><ul class="notion-list notion-list-disc notion-block-8d3d912a1301479fb1575648c7b3fc45"><li>Your function should produce as output a WAV-file, containing the CI-simulation corresponding to the selected audio-file.</li></ul><ul class="notion-list notion-list-disc notion-block-a5b764526fce4834aebbdd21fdee0de0"><li>The simulated CI should have approximately the following specifications:</li><ul class="notion-list notion-list-disc notion-block-a5b764526fce4834aebbdd21fdee0de0"><li><span role="button" tabindex="0" class="notion-equation notion-equation-inline"><span></span></span> = 200 Hz</li><li><span role="button" tabindex="0" class="notion-equation notion-equation-inline"><span></span></span> = 500 - 5000 Hz</li><li>numElectrodes = 20</li><li>StepSize = 5 - 20 ms</li></ul></ul><ul class="notion-list notion-list-disc notion-block-85a47a9b3547480285a3dbd515b6efeb"><li>Provide your program with an option to try out the <em>n-out-of-m</em> strategy used in real cochlear implants: at any given time, only those <em>n</em> (typically 6) of all <em>m</em> available (typically around 20) electrodes are activated, which have the largest stimulation.</li></ul><h4 class="notion-h notion-h3 notion-h-indent-1 
notion-block-40405290b0ae4380988cdc252301da8d" data-id="40405290b0ae4380988cdc252301da8d"><span><div id="40405290b0ae4380988cdc252301da8d" class="notion-header-anchor"></div><a class="notion-hash-link" href="#40405290b0ae4380988cdc252301da8d" title="Possible Program Structure:"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Possible Program Structure:</span></span></h4><ul class="notion-list notion-list-disc notion-block-01fdc2abd7ba45daa527aee4ba03d636"><li>Select file</li></ul><ul class="notion-list notion-list-disc notion-block-8556fd4701c54f5e905d649fe6e8ed8f"><li>Read data (mp3, wav)</li></ul><ul class="notion-list notion-list-disc notion-block-77ecf13a460c4a35901e7957ef684c40"><li>Set parameters: number of electrodes, frequency range, window size (sec), [window type], [sample rate]</li></ul><ul class="notion-list notion-list-disc notion-block-6bd3b8ca5d7b4b5d9f4d8547614d33bd"><li>[Calculate: window size (points), electrode locations]</li></ul><ul class="notion-list notion-list-disc notion-block-cceb93eb820440fb968b55edd3f36104"><li>Allocate memory</li></ul><ul class="notion-list notion-list-disc notion-block-a3b1ad0bfa904148ae5e51ff24f07b27"><li>For [all data]: get data, calculate stimulation strength, set/write output values</li></ul><ul class="notion-list notion-list-disc notion-block-99d677e7f38a4362b6228690fc165494"><li>Play result</li></ul><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-2005cdaf0cf74119b7b3c44585ef2bad"><div 
style="position:relative;display:flex;justify-content:center;align-self:center;width:480px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F90d79bf8-1b97-43e0-b00c-c1495b1a5202%2FUntitled.png?table=block&amp;id=2005cdaf-0cf7-4119-b7b3-c44585ef2bad" alt="Figure 7: Possible workflow." loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 7:</b> Possible workflow.</figcaption></div></figure><hr class="notion-hr notion-block-94edb83857f34cc4a3a4147dc8f732c9"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-b45f31342e1144949752b5100b0b6c13" data-id="b45f31342e1144949752b5100b0b6c13"><span><div id="b45f31342e1144949752b5100b0b6c13" class="notion-header-anchor"></div><a class="notion-hash-link" href="#b45f31342e1144949752b5100b0b6c13" title="Implementation"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Implementation</span></span></h2><div class="notion-text notion-block-cfd09f51fec14171a26a3fd44239861a">In the implementation, we closely follow the principles outlined above. 
</div><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-c5abf5c36a9745e7a622d468f48a80f2" data-id="c5abf5c36a9745e7a622d468f48a80f2"><span><div id="c5abf5c36a9745e7a622d468f48a80f2" class="notion-header-anchor"></div><a class="notion-hash-link" href="#c5abf5c36a9745e7a622d468f48a80f2" title="Read in the data"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Read in the data</span></span></h3><div class="notion-text notion-block-11eebe810fb64bc3839c4cc32e0d96ad">First, we need to read in the original sound. In order to process the data more easily, we take the average of multiple channels.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python"># Read data.
sound = sounds.Sound()
if len(sound.data.shape) &gt; 1:
    # Average on multiple channels.
    sound.data = np.mean(sound.data, axis=1)</code></pre><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-3ef9a5a5795e467eb14ef8f5b0e7f158" data-id="3ef9a5a5795e467eb14ef8f5b0e7f158"><span><div id="3ef9a5a5795e467eb14ef8f5b0e7f158" class="notion-header-anchor"></div><a class="notion-hash-link" href="#3ef9a5a5795e467eb14ef8f5b0e7f158" title="Parameter settings"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Parameter settings</span></span></h3><div class="notion-text notion-block-713af4c244a04086aa1246748069847d">We set the parameters according to the suggestions in the document of requirements. To make the parameter setting more modularized, we set the parameters in a YAML file and create a class to structurally contain the parameters. The parameter settings are:</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-yaml">ex1:
  m_electrodes: 22  # Total number of electrodes
  n_channels: 9  # Number of activated channels
  lo_freq: 200  # Hz
  hi_freq: 5000  # Hz
  period_window: 6.0e-3  # sec
  step_window: 5.0e-4  # sec
  type_window: &#x27;moore&#x27;  # Filter window type</code></pre><div class="notion-text notion-block-db35878280544ca3a16f2a4120018fb3">In addition, we create a graphical interface that allows users to change the parameters interactively. </div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python"># Set parameters. Load the parameters from a YAML script.
path_crt = os.getcwd()
Params = Params_audio(path_options=os.path.join(path_crt, &#x27;options.yaml&#x27;))

# Create a graphical interface to visualize and change the parameter settings.
Params = gui_params(Params=Params)

# Calculate other parameters.
size_window = round((Params.period_window/sound.duration)*sound.totalSamples)  # window size (points)
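# (Side note, not in the original script: since totalSamples/duration equals
#  the sampling rate, size_window is equivalent to round(period_window * rate),
#  e.g. 6 ms at 44100 Hz gives a window of about 265 samples.)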
size_step = round((Params.step_window/sound.duration)*sound.totalSamples)  # step size (points)</code></pre><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-dacfb603cfca44c0a7acafe9b4267733" data-id="dacfb603cfca44c0a7acafe9b4267733"><span><div id="dacfb603cfca44c0a7acafe9b4267733" class="notion-header-anchor"></div><a class="notion-hash-link" href="#dacfb603cfca44c0a7acafe9b4267733" title="Apply filters to the original audio"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Apply filters to the original audio</span></span></h3><div class="notion-text notion-block-598ebf9ffb6246cea77fd1ffd1ed96c0">In this step, we set up a bank of IIR filters and apply them to the original audio. Next, we simulate the n-out-of-m strategy: at any given time, only the n electrodes with the strongest stimulation, out of all m available electrodes, are activated. Accordingly, we calculate the signal intensity of each electrode and activate only that subset. 
Then, we need to calculate the corresponding amplitudes for the subsequent signal reconstruction process.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python"># Set up IIR filter banks.
(forward, feedback, fcs, ERB, B) = GTS.GT_coefficients(
    fs=sound.rate, n_channels=Params.m_electrodes, 
    lo_freq=Params.lo_freq, hi_freq=Params.hi_freq, method=Params.type_window
    )
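# (Background note, not part of the original call: GT_coefficients presumably
#  returns one gammatone IIR filter per electrode, with the center frequencies
#  fcs spaced on the ERB scale; one ERB at frequency f is approximately
#  24.7 * (4.37 * f / 1000 + 1) Hz, the Glasberg-Moore approximation.)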
    
# Apply filter banks on the whole audio.
data_filtered = GTS.GT_apply(sound.data, forward, feedback)

# Simulate the n_out_of_m process.
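# (Sketch of the assumed selection rule inside n_out_m: for every time window,
#  keep the n_channels band outputs with the largest intensity and silence the
#  remaining electrodes, e.g. via np.argsort on the per-band intensities.)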
amp_n_out_m = n_out_m(
    data_audio=data_filtered, size_window=size_window, 
    size_step=size_step, Params=Params
    )</code></pre><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-083ac7364ea3449f886d208fb42cc4ac" data-id="083ac7364ea3449f886d208fb42cc4ac"><span><div id="083ac7364ea3449f886d208fb42cc4ac" class="notion-header-anchor"></div><a class="notion-hash-link" href="#083ac7364ea3449f886d208fb42cc4ac" title="Audio reconstruction"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Audio reconstruction</span></span></h3><div class="notion-text notion-block-c49d52ed312c4bcea1d631a387227c3a">After applying the n-out-of-m strategy, we have the amplitude of each frequency band in every time window. To reconstruct the output audio, we generate sine/cosine waves with these specific amplitudes at the corresponding center frequencies, then sum them linearly to obtain the final result. </div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python"># Audio reconstruction.
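# (Assumed form of the reconstruction, for illustration only: within window k,
#  the output y[t] is the sum over bands i of amps[k, i] * sin(2*pi*fcs[i]*t),
#  i.e. one sinusoid per electrode at its center frequency, scaled by the
#  band amplitude of that window.)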
y_output = reconstruction(
    amps=amp_n_out_m, size_step=size_step, totalSamples=sound.totalSamples, 
    duration=sound.duration, fcs=fcs
    )
# Write to output file
sound_output = sounds.Sound(inData=y_output, inRate=sound.rate)
sound_output.write_wav(full_out_file=os.path.join(path_crt, &#x27;output.wav&#x27;))</code></pre><hr class="notion-hr notion-block-8f0f4d3197104faabdf5372d50ee84b6"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-0884912df5ea419d8cb6ad9bec423e72" data-id="0884912df5ea419d8cb6ad9bec423e72"><span><div id="0884912df5ea419d8cb6ad9bec423e72" class="notion-header-anchor"></div><a class="notion-hash-link" href="#0884912df5ea419d8cb6ad9bec423e72" title="Demo"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Demo</span></span></h2><div class="notion-text notion-block-207b1458639f469c9555fceb2ca6a741">Here we present several examples of the simulation results. We use original sounds provided with the requirements document and compare them with the reconstructed outputs. 
The parameter settings are the same as before.</div><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-34c265ba4b284093a5c53b73dc0270b2" data-id="34c265ba4b284093a5c53b73dc0270b2"><span><div id="34c265ba4b284093a5c53b73dc0270b2" class="notion-header-anchor"></div><a class="notion-hash-link" href="#34c265ba4b284093a5c53b73dc0270b2" title="tiger.wav"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">tiger.wav</span></span></h3><ul class="notion-list notion-list-disc notion-block-c52cab55d92244588fb5ac964a168dad"><li>Original:</li></ul><div class="notion-audio notion-block-ba3b1b68464544fca16584ac0b9bcf50"><audio controls="" preload="none" src="https://file.notion.so/f/s/99cd9d1f-df42-400c-ae43-418a0ce5ce45/tiger.wav?id=ba3b1b68-4645-44fc-a165-84ac0b9bcf50&amp;table=block&amp;spaceId=7603c8c0-d859-479e-89fa-8c7beb0d0987&amp;expirationTimestamp=1698984000000&amp;signature=tbYPZonuLvveg5zmVT2tN9CA--Z8YC4CD_Kg4UzbDrE"></audio></div><ul class="notion-list notion-list-disc notion-block-d4b69a1bef104bc893cd9f605569d083"><li>Simulation:</li></ul><div class="notion-audio notion-block-d232d8b67a4a4882a8f44e4da7c6697c"><audio controls="" preload="none" src="https://file.notion.so/f/s/9af638c9-c587-407e-8353-ddaa58cfa2ad/output.wav?id=d232d8b6-7a4a-4882-a8f4-4e4da7c6697c&amp;table=block&amp;spaceId=7603c8c0-d859-479e-89fa-8c7beb0d0987&amp;expirationTimestamp=1698984000000&amp;signature=97fcgnpQvCFu2gKYW7Src16pdCwkfKY5erIrCyX2kic"></audio></div><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-6d4d9dd0ce6d4dac8b65dc2fa1c8c6a3" 
data-id="6d4d9dd0ce6d4dac8b65dc2fa1c8c6a3"><span><div id="6d4d9dd0ce6d4dac8b65dc2fa1c8c6a3" class="notion-header-anchor"></div><a class="notion-hash-link" href="#6d4d9dd0ce6d4dac8b65dc2fa1c8c6a3" title="scales.wav"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">scales.wav</span></span></h3><ul class="notion-list notion-list-disc notion-block-293ff4aff2e340a39fc3254f5526ae7b"><li>Original:</li></ul><div class="notion-audio notion-block-93c931f2fd5d487697e15687a73b9cb1"><audio controls="" preload="none" src="https://file.notion.so/f/s/3a0410f0-3d2a-4d3c-8444-1a6d5c6b4e7a/scales.wav?id=93c931f2-fd5d-4876-97e1-5687a73b9cb1&amp;table=block&amp;spaceId=7603c8c0-d859-479e-89fa-8c7beb0d0987&amp;expirationTimestamp=1698984000000&amp;signature=YMaJwaYe22aQjB58o1pzuHUCToBQ7VK8UtnZ3Qwl-l0"></audio></div><ul class="notion-list notion-list-disc notion-block-7261c453fcd44fa2bc8367e5c9e746aa"><li>Simulation:</li></ul><div class="notion-audio notion-block-0105346e30bd4a02a4e5c70eebe83a04"><audio controls="" preload="none" src="https://file.notion.so/f/s/0eec19b8-d6da-4820-bc34-e3e729ef6c97/output.wav?id=0105346e-30bd-4a02-a4e5-c70eebe83a04&amp;table=block&amp;spaceId=7603c8c0-d859-479e-89fa-8c7beb0d0987&amp;expirationTimestamp=1698984000000&amp;signature=twCYj9-NmCnU-HTXXp3gJ2Rc5g29Nel6pQISqRRxQSo"></audio></div><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-2ee655ec7b634210a72fc788a38099f9" data-id="2ee655ec7b634210a72fc788a38099f9"><span><div id="2ee655ec7b634210a72fc788a38099f9" class="notion-header-anchor"></div><a class="notion-hash-link" 
href="#2ee655ec7b634210a72fc788a38099f9" title="harmony.wav"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">harmony.wav</span></span></h3><ul class="notion-list notion-list-disc notion-block-cb9938bc57c64cc49828904097a18e23"><li>Original:</li></ul><div class="notion-audio notion-block-08e93a183efc44b8a08f3fbe6c12720a"><audio controls="" preload="none" src="https://file.notion.so/f/s/efa00e68-5cef-4a25-b2d6-f21f57e6eaf7/harmony.wav?id=08e93a18-3efc-44b8-a08f-3fbe6c12720a&amp;table=block&amp;spaceId=7603c8c0-d859-479e-89fa-8c7beb0d0987&amp;expirationTimestamp=1698984000000&amp;signature=M7zlZUGudPdypvJelIszS05b6_6EEpxZ_HMmfhZZutM"></audio></div><ul class="notion-list notion-list-disc notion-block-2df66a41d00e4fe984118ff93b6f7844"><li>Simulation:</li></ul><div class="notion-audio notion-block-1c98c56489624fff82bcae60a6fc4481"><audio controls="" preload="none" src="https://file.notion.so/f/s/07e2aaff-f652-41f0-bed9-bc6611c7b435/output.wav?id=1c98c564-8962-4fff-82bc-ae60a6fc4481&amp;table=block&amp;spaceId=7603c8c0-d859-479e-89fa-8c7beb0d0987&amp;expirationTimestamp=1698984000000&amp;signature=rDLI9N-0LbyeJWbeAKe0SPI40aS79hJbAbOJ7DJGmf4"></audio></div><div class="notion-text notion-block-c2a038b2240b4f5e9fde69e325e3aef8">From this comparison, we can conclude that our simulation reconstructs the original sounds, albeit at relatively low quality. One reason is that we apply the n-out-of-m strategy during the simulation process. 
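As a simplified illustration of this strategy (a sketch with hypothetical band envelopes, not our exact implementation), an n-out-of-m scheme keeps, in every time frame, only the n frequency bands with the strongest envelopes and silences the rest:

```python
import numpy as np

def n_out_of_m(envelopes, n):
    """Zero all but the n largest band envelopes in each time frame.

    envelopes: non-negative array of shape (m_bands, n_frames).
    """
    out = np.zeros_like(envelopes)
    # Row indices of the n strongest bands, per column (frame)
    top = np.argsort(envelopes, axis=0)[-n:, :]
    cols = np.arange(envelopes.shape[1])
    out[top, cols] = envelopes[top, cols]
    return out

# Example: 20 bands, 100 frames, keep the 6 strongest bands per frame
env = np.abs(np.random.randn(20, 100))
act = n_out_of_m(env, 6)
assert (np.count_nonzero(act, axis=0) <= 6).all()
```

Deactivated bands carry no stimulation at all, which is why the reconstruction loses spectral detail.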
Consequently, only a few frequency bands are activated at any given time.</div><div class="notion-text notion-block-65163e2152a6434196910450f5a75d05">In the interest of brevity, we present only the main structure of the simulation process. The source code can be found in my <a class="notion-link" href="/7a67db29735f4c509282a9173b0d1a17">GitHub</a>.</div><div class="notion-blank notion-block-9275bc1535904d1faf4fc383f443443b"> </div></main></div>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Optimal Facial Regions for RPPG Algorithms in HR Estimation]]></title>
            <link>https://shuoli199909.com/article/eth-rp</link>
            <guid>https://shuoli199909.com/article/eth-rp</guid>
            <pubDate>Tue, 02 May 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Summary of Shuo Li’s research project in the BMHT lab.]]></description>
            <content:encoded><![CDATA[<div id="container" class="mx-auto undefined"><main class="notion light-mode notion-page notion-block-f66a6d15603f455e958f26aac79f0a00"><div class="notion-viewport"></div><div class="notion-collection-page-properties"></div><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-22eb044f24b4448bb19aff2072840d79" data-id="22eb044f24b4448bb19aff2072840d79"><span><div id="22eb044f24b4448bb19aff2072840d79" class="notion-header-anchor"></div><a class="notion-hash-link" href="#22eb044f24b4448bb19aff2072840d79" title="Brief Information"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Brief Information</span></span></h2><div class="notion-text notion-block-964754a7478e44b3b0bc2e378d8de882">This is a research project (2023/05/16-2023/10/02) conducted by Shuo Li in the BMHT Lab. The project was supervised by <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://scholar.google.fi/citations?user=-WFwzjoAAAAJ&amp;hl=en">Dr. Moe Elgendi</a>, and the project advisor was <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://0-scholar-google-com.brum.beds.ac.uk/citations?user=okZnorwAAAAJ&amp;hl=en">Prof. Dr. Carlo Menon</a>. The project applies remote photoplethysmography (rPPG) algorithms to heart rate (HR) estimation. Our main goal is to compare the performance of different facial ROIs and draw consistent conclusions. 
More details can be found in the following links:</div><ul class="notion-list notion-list-disc notion-block-f5b0f1dcf70e4668ba35f4f87238f0e5"><li>Written Thesis:</li><ul class="notion-list notion-list-disc notion-block-f5b0f1dcf70e4668ba35f4f87238f0e5"><div class="notion-text notion-block-14531a4b7e944cadaf830b5a52a65d25"><a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://1drv.ms/b/s!Ajfo_rNF5Ay-gYwGKXJBBZo0jURaPQ?e=V2EAhv">ProjectThesis_shuoli.pdf</a></div></ul></ul><ul class="notion-list notion-list-disc notion-block-e5a39325ef0c4c4787e282d3947d32f2"><li>Slides (for presentation):</li><ul class="notion-list notion-list-disc notion-block-e5a39325ef0c4c4787e282d3947d32f2"><div class="notion-text notion-block-2c3edef423ab466293f6b6e73196d73d"><a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://1drv.ms/b/s!Ajfo_rNF5Ay-gY15dseVUuZ9cvto4g?e=f3FRgy">OptimalROI_Pre_shuoli.pdf</a></div></ul></ul><ul class="notion-list notion-list-disc notion-block-f374038f6193449fa8445efd5d04cd11"><li>Literature Review (Under Review in <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://www.nature.com/commsmed">Communications Medicine</a>)</li><ul class="notion-list notion-list-disc notion-block-f374038f6193449fa8445efd5d04cd11"><div class="notion-text notion-block-f5048ecf4cb1444d8f6f5229f00c5b24"><a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://mts-commsmed.nature.com/commsmed_files/2023/09/28/00002074/00/2074_0_art_file_666062_s1grzw_convrt.pdf">OptimalROI_Review_shuoli.pdf</a></div></ul></ul><ul class="notion-list notion-list-disc notion-block-2f8104afd1d94d2a8be5639ecdb580de"><li>Optimal ROIs for different subjects’ movements (Under Preparation):</li><ul class="notion-list notion-list-disc notion-block-2f8104afd1d94d2a8be5639ecdb580de"><div class="notion-text notion-block-48be5a58059244fc8be331ae89e6d9c1"><a target="_blank" rel="noopener noreferrer" class="notion-link" 
href="https://1drv.ms/b/s!Ajfo_rNF5Ay-gY4EvgbdArfAgr_a5Q?e=UM3iPE">OptimalROI_Motion_shuoli.pdf</a></div></ul></ul><ul class="notion-list notion-list-disc notion-block-3e9963f67ab24b82a2555a25e4687827"><li>Source Code:</li><ul class="notion-list notion-list-disc notion-block-3e9963f67ab24b82a2555a25e4687827"><div class="notion-text notion-block-a3c1f91d78604d27ab2ee25bf96a54e1">The source code of this research project has been uploaded to my <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://github.com/shuoli199909/optimal_roi_rppg">GitHub</a>. 🧐</div></ul></ul></main></div>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Simulation of a Vestibular Implant]]></title>
            <link>https://shuoli199909.com/article/eth-cs-ex2</link>
            <guid>https://shuoli199909.com/article/eth-cs-ex2</guid>
            <pubDate>Tue, 02 May 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Simulation of Sensory Systems (Exercise 2: The Vestibular System).]]></description>
            <content:encoded><![CDATA[<div id="container" class="mx-auto undefined"><main class="notion light-mode notion-page notion-block-c20bc1c4c5014fe2aa9f97cc60fbfbdb"><div class="notion-viewport"></div><div class="notion-collection-page-properties"></div><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-dabed2520151406ab34128df994518a2" data-id="dabed2520151406ab34128df994518a2"><span><div id="dabed2520151406ab34128df994518a2" class="notion-header-anchor"></div><a class="notion-hash-link" href="#dabed2520151406ab34128df994518a2" title="Background"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Background</span></span></h2><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-ac7e69a76b4f4c15985239d4786ed09a"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:240px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F375af3e2-9030-4c85-8848-d518e028b4a1%2FUntitled.png?table=block&amp;id=ac7e69a7-6b4f-4c15-9852-39d4786ed09a" alt="Figure 1: A diagram of the vestibular system." loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 1: A diagram of the vestibular system.</b></figcaption></div></figure><div class="notion-text notion-block-57887c8e3a7f4c9b8c72d0d97ebd619e">The vestibular system is a sensory system that is critically important in humans for gaze and image stability as well as postural control. 
Patients with complete bilateral vestibular loss are severely disabled and experience a poor quality of life. There are very few effective treatment options for patients with no vestibular function. Over the last 10 years, rapid progress has been made in developing artificial &#x27;vestibular implants&#x27; or &#x27;prostheses&#x27;, based on cochlear implant technology. As of 2019, 19 patients worldwide have received vestibular implants and the results are encouraging. Vestibular implants are now becoming part of an increasing effort to develop artificial, bionic sensory systems.</div><hr class="notion-hr notion-block-249fa61152a14902be7054e9949d6a22"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-b3a3077d6d8b4b5990b8d554dd717011" data-id="b3a3077d6d8b4b5990b8d554dd717011"><span><div id="b3a3077d6d8b4b5990b8d554dd717011" class="notion-header-anchor"></div><a class="notion-hash-link" href="#b3a3077d6d8b4b5990b8d554dd717011" title="Exercise Description: Simulation of a Vestibular Implant"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title"><b>Exercise Description: Simulation of a Vestibular Implant</b></span></span></h2><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-352aa813851b4d5ba8c43e32f78de575" data-id="352aa813851b4d5ba8c43e32f78de575"><span><div id="352aa813851b4d5ba8c43e32f78de575" class="notion-header-anchor"></div><a class="notion-hash-link" href="#352aa813851b4d5ba8c43e32f78de575" title="Data"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 
1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Data</span></span></h3><div class="notion-text notion-block-665df388e31e4daea5f48e69ee545226"><em>The files </em><code class="notion-inline-code"><em>Walking_01.txt</em></code><em> and </em><code class="notion-inline-code"><em>Walking_02.txt</em></code><em> contain the linear accelerations and angular velocities recorded when walking in a figure-of-eight, around a table and a chair.</em></div><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-12fc68deb4b4423d90b138bad019574a" data-id="12fc68deb4b4423d90b138bad019574a"><span><div id="12fc68deb4b4423d90b138bad019574a" class="notion-header-anchor"></div><a class="notion-hash-link" href="#12fc68deb4b4423d90b138bad019574a" title="Methods"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Methods</span></span></h3><ul class="notion-list notion-list-disc notion-block-2487cc1ffbf248508f91711c113da44c"><li><em>For measuring the movement, a sensor by </em><a target="_blank" rel="noopener noreferrer" class="notion-link" href="http://www.xsens.com/en/general/mtx"><em>XSens</em></a><em> was used, with the orientation on the head as indicated in the figure below.</em></li></ul><ul class="notion-list notion-list-disc 
notion-block-50f7a9d798d249afa1e63746f0dc9977"><li><em>At the start of the recordings, the subject was stationary, and the head was in an orientation with Reid&#x27;s line 15 deg nose up. The sensor orientation on the head is such that at t=0, the &quot;shortest rotation that aligns the y-axis of the sensor with gravity&quot; brings the sensor into such an orientation that the (x/ -z / y) axes of the sensor align with the space-fixed (x/y/z) axes. This requirement uniquely determines the sensor-orientation at t=0.</em></li></ul><ul class="notion-list notion-list-disc notion-block-f17bdc2d2a3b4354adabf29cd50b82e0"><li><em>Data were sampled at 50 Hz.</em></li></ul><ul class="notion-list notion-list-disc notion-block-5b1504c3d4b24e76a23c7f68bbdbb762"><li><em>The acceleration data are given in m/s^2, the angular velocity data in rad/s.</em></li></ul><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-a128d16ba5eb4644a254a7c058d7bf8c"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:288px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F061f5303-dd57-4adf-b0dd-30b14d223aba%2FUntitled.png?table=block&amp;id=a128d16b-a5eb-4644-a254-a7c058d7bf8c" alt="Figure 2: A diagram of the walking process and the settings of coordinate systems." 
loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 2: A diagram of the walking process and the settings of coordinate systems.</b></figcaption></div></figure><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-db21c6b27ed54a3898ffb848231ec0a0" data-id="db21c6b27ed54a3898ffb848231ec0a0"><span><div id="db21c6b27ed54a3898ffb848231ec0a0" class="notion-header-anchor"></div><a class="notion-hash-link" href="#db21c6b27ed54a3898ffb848231ec0a0" title="Task 1: Simulate the vestibular neural response"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Task 1: Simulate the vestibular neural response</span></span></h3><div class="notion-text notion-block-3efa8fc8740b4c8fa9383f083537737d"><em>Simulate the neural vestibular responses during walking, using only the 3D-linear-acceleration and the 3D-angular-velocity from the file </em><code class="notion-inline-code"><em>Walking_02.txt</em></code><em>:</em></div><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-3835bfeceda744b190ba55f19750e8b3" data-id="3835bfeceda744b190ba55f19750e8b3"><span><div id="3835bfeceda744b190ba55f19750e8b3" class="notion-header-anchor"></div><a class="notion-hash-link" href="#3835bfeceda744b190ba55f19750e8b3" title="Right horizontal semicircular canal"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 
012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Right horizontal semicircular canal</span></span></h4><div class="notion-text notion-block-a11a7d76936645b184790cc48c218394"><em>Calculate the maximum cupular displacements (positive and negative). These two values should be written into the text file </em><code class="notion-inline-code"><em>CupularDisplacement.txt</em></code><em>.</em></div><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-e467c885173a4846a68aa30e8f88de98" data-id="e467c885173a4846a68aa30e8f88de98"><span><div id="e467c885173a4846a68aa30e8f88de98" class="notion-header-anchor"></div><a class="notion-hash-link" href="#e467c885173a4846a68aa30e8f88de98" title="Otolith hair cell"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Otolith hair cell</span></span></h4><div class="notion-text notion-block-0039a30b850e4202b39d7477369b754c"><em>Assume that an otolith hair-cell has at t=0 the on-direction=[0 1 0] in space fixed coordinates (i.e. pointing to the left as seen from the subject).  
Calculate the minimum and maximum acceleration along this direction, in m/s^2 and write this value to the text file </em><code class="notion-inline-code"><em>MaxAcceleration.txt</em></code><em>.</em></div><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-85db3ebfa8024999860fd074f70563ae" data-id="85db3ebfa8024999860fd074f70563ae"><span><div id="85db3ebfa8024999860fd074f70563ae" class="notion-header-anchor"></div><a class="notion-hash-link" href="#85db3ebfa8024999860fd074f70563ae" title="Task 2: Calculate the “nose-direction” during the movement"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Task 2: Calculate the “nose-direction” during the movement</span></span></h3><div class="notion-text notion-block-aadbbef7584c4d1f8a989c074604e5b0">Calculate the orientation of the “Nose”-vector (as indicated in the figure) at the end of walking the loop (for <code class="notion-inline-code"><em>Walking_02.txt</em></code>),</div><ul class="notion-list notion-list-disc notion-block-38982b9d72634ecbaabf274ef6721b09"><li>based on the angular velocity readings,</li></ul><ul class="notion-list notion-list-disc notion-block-904d8935b66c417f974e2cb3b392daf4"><li>and assuming that at t=0: nose=[1,0,0].</li></ul><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-58236688be994cad8fe0f857338c3ec5" data-id="58236688be994cad8fe0f857338c3ec5"><span><div id="58236688be994cad8fe0f857338c3ec5" class="notion-header-anchor"></div><a class="notion-hash-link" href="#58236688be994cad8fe0f857338c3ec5" title="Input data"><svg viewBox="0 0 16 16" width="16" 
height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Input data</span></span></h3><div class="notion-text notion-block-3631544b168046ff9190a114fe73f146">The folder <code class="notion-inline-code"><em>MovementData</em></code><em> contains the following files:</em></div><ul class="notion-list notion-list-disc notion-block-4db11c84c7da464cb35b20f628ef18a5"><li><code class="notion-inline-code"><em>Walking_01.txt</em></code><em>, </em><code class="notion-inline-code"><em>Walking_02.txt</em></code><em>: Linear acceleration and angular velocity recordings from a human subject while walking around.</em></li></ul><ul class="notion-list notion-list-disc notion-block-4bf2c965f570481a90071484e8df36f7"><li><code class="notion-inline-code"><em>SCC_Humans.m</em></code><em>: Orientations of the human semicircular canals, with respect to &quot;Reid&#x27;s plane&quot;. (Reid&#x27;s plane is the plane defined by the lower rim of the orbita, and the center of the external auditory meatus. 
In &quot;English&quot;: the bottom of the eyes, and the middle of the ears.)</em></li></ul><hr class="notion-hr notion-block-87fa5be84fec47e2a59d65c1e03709bc"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-cfcbbe97dbfe47c581b93d4cb50dc564" data-id="cfcbbe97dbfe47c581b93d4cb50dc564"><span><div id="cfcbbe97dbfe47c581b93d4cb50dc564" class="notion-header-anchor"></div><a class="notion-hash-link" href="#cfcbbe97dbfe47c581b93d4cb50dc564" title="Tips"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Tips</span></span></h2><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-af2d85900e9c4a608cfde8c996e8e85a" data-id="af2d85900e9c4a608cfde8c996e8e85a"><span><div id="af2d85900e9c4a608cfde8c996e8e85a" class="notion-header-anchor"></div><a class="notion-hash-link" href="#af2d85900e9c4a608cfde8c996e8e85a" title="Parameters"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Parameters</span></span></h3><ul class="notion-list notion-list-disc notion-block-ca19449991074b60ab33fda41f73cf67"><li><em>The radius of the human semicircular canals is 3.2 mm.</em></li></ul><ul class="notion-list notion-list-disc 
notion-block-c031f53bceb442099fddb77e8bea8fbf"><li><em>For the given input, the magnitude of the cupular displacement is about +/- 0.12 mm.</em></li></ul><ul class="notion-list notion-list-disc notion-block-9deb8537098f411a93ab8f564b90bea5"><li><em>For the given input, the minimum and maximum accelerations along this axis are about -5.63 m/s^2 and +6.87 m/s^2.</em></li></ul><hr class="notion-hr notion-block-c1d965d0ce6a463c9ad10034f84e7bf9"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-ab9f5ebbdc9d41f6ab1f1dc4018f3662" data-id="ab9f5ebbdc9d41f6ab1f1dc4018f3662"><span><div id="ab9f5ebbdc9d41f6ab1f1dc4018f3662" class="notion-header-anchor"></div><a class="notion-hash-link" href="#ab9f5ebbdc9d41f6ab1f1dc4018f3662" title="Implementation"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Implementation</span></span></h2><div class="notion-text notion-block-2887853b85c44a529675698cd889ad49">In the implementation, we divide the simulation into two parts: Task 1 and Task 2.</div><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-04ebf8ceb99f40f0a7dd79d78406b3b4" data-id="04ebf8ceb99f40f0a7dd79d78406b3b4"><span><div id="04ebf8ceb99f40f0a7dd79d78406b3b4" class="notion-header-anchor"></div><a class="notion-hash-link" href="#04ebf8ceb99f40f0a7dd79d78406b3b4" title="Read in the data"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 
1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Read in the data</span></span></h3><div class="notion-text notion-block-901ac4ecb09246d1bb7001e779f579b3">Before implementing the simulations of task 1 and task 2, we need to read in the movement data.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python"># Load sensor data
dir_data = os.getcwd()
data_sensor = XSens(os.path.join(dir_data, &#x27;MovementData&#x27;, &#x27;Walking_02.txt&#x27;))</code></pre><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-0c728c9c2e3f44d1be1153fc96c1d190" data-id="0c728c9c2e3f44d1be1153fc96c1d190"><span><div id="0c728c9c2e3f44d1be1153fc96c1d190" class="notion-header-anchor"></div><a class="notion-hash-link" href="#0c728c9c2e3f44d1be1153fc96c1d190" title="Task 1"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Task 1</span></span></h3><div class="notion-text notion-block-91161c30625747759a04ad1d83788d44">We set the parameters according to the suggestions in the requirements document. To keep the parameter settings modular, we define them in a YAML file and create a class to structurally contain the parameters. The parameter settings are:</div><div class="notion-text notion-block-ee08c18d51444c79b803a27c7629728b">In addition, we create a graphical interface that allows users to change the parameters interactively. 
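A minimal sketch of such a parameter container follows (the field names are illustrative; in the real code the dictionary would come from yaml.safe_load on the YAML file, and our actual class may differ):

```python
class SimParams:
    """Structurally contain simulation parameters as attributes."""
    def __init__(self, cfg):
        # cfg is a dict, e.g. the result of yaml.safe_load(...)
        for key, value in cfg.items():
            setattr(self, key, value)

# Inlined here so the sketch is self-contained; values are illustrative
cfg = {'canal_radius_mm': 3.2, 'sample_rate_hz': 50}
params = SimParams(cfg)
assert params.canal_radius_mm == 3.2
```

Keeping the parameters in one object makes it easy for a GUI to read and write them in a single place.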
</div><div class="notion-text notion-block-6f2fd32f1eb943ee92d38f239280c8cd">In task 1, we need to finish two parts of simulations:</div><ul class="notion-list notion-list-disc notion-block-a41eee30bc99439fa2f1e47be7ea6294"><li>Calculate the maximum cupular displacements.</li></ul><ul class="notion-list notion-list-disc notion-block-9c6bd165e58747ab9c173c34d841f008"><li>Calculate the minimum and maximum acceleration along this direction.</li></ul><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-46196d857a9d42c98819983a75ede35e" data-id="46196d857a9d42c98819983a75ede35e"><span><div id="46196d857a9d42c98819983a75ede35e" class="notion-header-anchor"></div><a class="notion-hash-link" href="#46196d857a9d42c98819983a75ede35e" title="Data decomposition"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Data decomposition</span></span></h4><div class="notion-text notion-block-5fcb2fac049c42bc8ad2b48d324484fb">In this process, the pre-loaded data is decomposed and some parameters are calculated.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 
9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python">## Data decomposition
acc = data.acc  #  Acceleration
omega = data.omega  #  Angular velocity
rate = data.rate  #  Sample rate
N = data.totalSamples  #  Number of samples

## Parameter calculation
vec_g_hc = np.array([0, 0, -9.81])  #  Gravity in Zurich [m/s^2]. Head coordinates.
vec_g_approx = np.dot(sk.rotmat.R(axis=&#x27;x&#x27;, angle=90), vec_g_hc)
vec_ref = acc[0, :]  #  Take acc(t=0) as the reference vector.</code></pre><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-11c37aec5332461a9995a5e051fb212b" data-id="11c37aec5332461a9995a5e051fb212b"><span><div id="11c37aec5332461a9995a5e051fb212b" class="notion-header-anchor"></div><a class="notion-hash-link" href="#11c37aec5332461a9995a5e051fb212b" title="Calculate the cupular displacement"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Calculate the cupular displacement</span></span></h4><div class="notion-text notion-block-9153502ceaaa4749a0a2c18857d1eee7">With the input data and parameters, we can compute the cupular displacement. First, we transform the angular velocity data from the sensor coordinate system into the head coordinate system. Then we take the orientation of the right horizontal canal from the model of the human SCCs and project the adjusted angular velocities onto it; the result is the stimulus. 
Finally, we calculate the cupular displacement from the stimulus, the sampling rate, the total number of samples and the radius of the SCC.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python">## Displacement of the cupula
# Adjust the angular velocities.
omega_adjusted = alignment(data=omega, vec_ref=vec_ref, vec_g=vec_g_approx)
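The `alignment` helper is defined in a separate module and not shown here. A minimal sketch of what it is assumed to do: rotate every sample by the shortest rotation that maps the measured reference vector (the acceleration at t=0) onto the known gravity direction, via Rodrigues' rotation formula.

```python
import numpy as np

def alignment(data, vec_ref, vec_g):
    """Rotate each row of `data` by the shortest rotation taking vec_ref onto vec_g."""
    a = vec_ref / np.linalg.norm(vec_ref)
    b = vec_g / np.linalg.norm(vec_g)
    v = np.cross(a, b)           # rotation axis, scaled by sin(angle)
    c = float(np.dot(a, b))      # cos(angle); the antiparallel case c == -1 is not handled
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])          # skew-symmetric cross-product matrix
    R = np.eye(3) + K + K @ K / (1.0 + c)      # Rodrigues' rotation formula
    return data @ R.T
```

This is a sketch under the stated assumption about the helper's behavior, not the original implementation.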
# Orientation of the SCCs in humans.
canal_left, canal_right = sccs_humans()
# Calculate the stimulation using the dot product.
stimulus = np.dot(omega_adjusted, canal_right[0, :])
# Calculate deflection
deflection_cupula = cal_deflection(stimulus, rate, N)
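`cal_deflection` is also defined elsewhere. A plausible sketch, assuming the classic overdamped torsion-pendulum model of the semicircular canals with time constants T1 = 0.01 s and T2 = 5 s (values assumed here, not taken from the original code):

```python
import numpy as np
from scipy import signal

def cal_deflection(stimulus, rate, num_samples, T1=0.01, T2=5.0):
    """Cupula deflection [rad] for an angular-velocity stimulus [rad/s],
    modeled as delta(s)/omega(s) = T1*T2*s / ((T1*s + 1)*(T2*s + 1))."""
    t = np.arange(num_samples) / rate
    scc = signal.lti([T1 * T2, 0.0], [T1 * T2, T1 + T2, 1.0])
    _, deflection, _ = signal.lsim(scc, U=stimulus, T=t)
    return deflection
```

Multiplying the angular deflection by the canal radius, as done below, then gives a displacement in mm.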
# Calculate displacement
radius_canal = 3.2  #  Radius of the SCC [mm]
displacement_cupula = deflection_cupula * radius_canal</code></pre><div class="notion-text notion-block-69178389a8f54335b80cb0c93f09aa07">The acceleration data can be calculated by adjusting the original acceleration data into the head coordinate system. The otolith acceleration is in the on-direction [0,1,0].</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python">## Acceleration along the on-direction [0 1 0] in the Head coordinates.
# Adjust the acceleration.
acc_adjusted = alignment(data=acc, vec_ref=vec_ref, vec_g=vec_g_approx)
# Preserve the component in the on-direction [0, 1, 0].
acc_otolith = acc_adjusted[:, 1]</code></pre><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-99d2387773264c7db2e820f49273ae79" data-id="99d2387773264c7db2e820f49273ae79"><span><div id="99d2387773264c7db2e820f49273ae79" class="notion-header-anchor"></div><a class="notion-hash-link" href="#99d2387773264c7db2e820f49273ae79" title="Preserve the results"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Preserve the results</span></span></h4><div class="notion-text notion-block-7e6b74c896eb4c3e90cdacfd81ba73ab">We save the results of task 1 locally. For simplicity, we only preserve the maximum and minimum data.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python">## Write the values in text files.
# Cupular Displacement
path_cupular = os.path.join(dir_crt, &#x27;output/CupularDisplacement.txt&#x27;)
with open(path_cupular, &#x27;w&#x27;, encoding=&#x27;utf-8&#x27;) as f:
    f.write(f&#x27;Maximum cupular displacement (positive): {np.max(displacement_cupula)} mm\n&#x27;)
    f.write(f&#x27;Maximum cupular displacement (negative): {np.min(displacement_cupula)} mm\n&#x27;)
print(&#x27;The maximum cupular displacements have been saved to: &#x27;)
print(path_cupular)
# Acceleration
path_acc = os.path.join(dir_crt, &#x27;output/MaxAcceleration.txt&#x27;)
with open(path_acc, &#x27;w&#x27;, encoding=&#x27;utf-8&#x27;) as f:
    f.write(f&#x27;Maximum acceleration along the direction of the otolith hair cell: {np.max(acc_otolith)} m/s^2\n&#x27;)
    f.write(f&#x27;Minimum acceleration along the direction of the otolith hair cell: {np.min(acc_otolith)} m/s^2\n&#x27;)
print(&#x27;The maximum and minimum acceleration have been saved to: &#x27;)
print(path_acc)</code></pre><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-84b007edc41b4f42aa608b534aec3f10" data-id="84b007edc41b4f42aa608b534aec3f10"><span><div id="84b007edc41b4f42aa608b534aec3f10" class="notion-header-anchor"></div><a class="notion-hash-link" href="#84b007edc41b4f42aa608b534aec3f10" title="Task 2"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Task 2</span></span></h3><div class="notion-text notion-block-525976c1d048426b9dd750d9a5122401">In task 2, we need to finish two parts of simulations:</div><ul class="notion-list notion-list-disc notion-block-bed17db58c1946ad8a59b746b972387f"><li>Calculate the “nose-direction” during the movement.</li></ul><ul class="notion-list notion-list-disc notion-block-270b3ed14cb049e1bbfcd14c4b15d9e8"><li>Visualize the quaternions to describe the head orientation.</li></ul><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-2542f3c58a3e43258f9637d4cbeceead" data-id="2542f3c58a3e43258f9637d4cbeceead"><span><div id="2542f3c58a3e43258f9637d4cbeceead" class="notion-header-anchor"></div><a class="notion-hash-link" href="#2542f3c58a3e43258f9637d4cbeceead" title="Caculate the nose orientation"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 
00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Calculate the nose orientation</span></span></h4><div class="notion-text notion-block-ba2623f06be6473a8f05703224c105f7">In order to calculate the nose orientation, we first need the head orientation. In the head coordinate system, we rotate the angular velocities according to the relative pose of Reid’s line and use quaternions to compute the head orientation. The nose orientation can then be computed from the head orientation and the fact that “at t=0: nose=[1,0,0]”.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python"># Head orientation
orientation_head = cal_head_orientation(omega_adjusted)
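`cal_head_orientation` is imported from a helper module. A minimal sketch of the underlying idea: integrate the body-fixed angular velocities into unit quaternions, one incremental rotation per sample (note that this sketch takes the sample rate explicitly, unlike the helper used above).

```python
import numpy as np

def cal_head_orientation_sketch(omega, rate):
    """Integrate angular velocities [rad/s] into orientation quaternions [w, x, y, z]."""
    q = np.array([1.0, 0.0, 0.0, 0.0])
    out = np.empty((len(omega), 4))
    for i, w in enumerate(omega):
        angle = np.linalg.norm(w) / rate                       # rotation over one sample
        axis = w / np.linalg.norm(w) if angle else np.zeros(3)
        dq = np.r_[np.cos(angle / 2), np.sin(angle / 2) * axis]
        # Hamilton product q * dq (body-fixed rates)
        q = np.array([
            q[0]*dq[0] - q[1]*dq[1] - q[2]*dq[2] - q[3]*dq[3],
            q[0]*dq[1] + q[1]*dq[0] + q[2]*dq[3] - q[3]*dq[2],
            q[0]*dq[2] - q[1]*dq[3] + q[2]*dq[0] + q[3]*dq[1],
            q[0]*dq[3] + q[1]*dq[2] - q[2]*dq[1] + q[3]*dq[0],
        ])
        out[i] = q
    return out
```

For example, a constant rotation about z of pi/2 rad/s for one second ends at a quaternion describing a 90-degree yaw.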
    
# Nose orientation
orientation_nose = []
for i in range(orientation_head.shape[0]):
    R_tmp = sk.quat.convert(orientation_head[i, :], to=&#x27;rotmat&#x27;)
    # t=0: nose=[1 0 0]
    orientation_nose.append(np.matmul(R_tmp, np.array([1,0,0])))
orientation_nose = np.array(orientation_nose)
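`sk.quat.convert` comes from scikit-kinematics; if that dependency is unavailable, the same conversion from a unit quaternion [w, x, y, z] to a rotation matrix can be written in plain NumPy:

```python
import numpy as np

def quat_to_rotmat(q):
    """Rotation matrix corresponding to a (normalized) quaternion [w, x, y, z]."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

Applying the matrix to [1, 0, 0], as in the loop above, rotates the initial nose vector into the current head orientation.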

# Write the values in text files.
path_nose_dir = os.path.join(dir_crt, &#x27;output/Nose_end.txt&#x27;)
with open(path_nose_dir, &#x27;w&#x27;, encoding=&#x27;utf-8&#x27;) as f:
    f.write(f&#x27;The orientation of the &quot;Nose&quot;-vector at the end of the walking loop: {orientation_nose[-1]}\n&#x27;)
print(&#x27;The nose vector at the end of the walking loop has been saved to:&#x27;)
print(path_nose_dir)</code></pre><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-99dc8fea591c4fb096c605945c20369d" data-id="99dc8fea591c4fb096c605945c20369d"><span><div id="99dc8fea591c4fb096c605945c20369d" class="notion-header-anchor"></div><a class="notion-hash-link" href="#99dc8fea591c4fb096c605945c20369d" title="Visualize the head orientation"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Visualize the head orientation</span></span></h4><div class="notion-text notion-block-d0cb9e12dc5847e8be1e978a1b966d28">The head orientation is represented in the form of quaternions. We can visualize the head orientation at each moment using a curve graph.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python"># Visualization of the head orientation
plt.plot(np.linspace(0, 20, orientation_head.shape[0]), orientation_head[:, 1], color=&#x27;blue&#x27;, label=&#x27;$q_1$&#x27;)
plt.plot(np.linspace(0, 20, orientation_head.shape[0]), orientation_head[:, 2], color=&#x27;green&#x27;, label=&#x27;$q_2$&#x27;)
plt.plot(np.linspace(0, 20, orientation_head.shape[0]), orientation_head[:, 3], color=&#x27;red&#x27;, label=&#x27;$q_3$&#x27;)
plt.grid(linestyle=&#x27;--&#x27;)
plt.xlim([0, 20])
plt.ylim([-0.4, 1.0])
plt.xlabel(&#x27;Time [s]&#x27;)
plt.ylabel(&#x27;Quaternion component&#x27;)
plt.legend()
plt.title(&#x27;Head Orientation&#x27;)
path_img = os.path.join(dir_crt, &#x27;output&#x27;, &#x27;head_orientation.PNG&#x27;)
plt.savefig(path_img)
print(&#x27;The visualization result of the head orientation has been saved to:&#x27;)
print(path_img)</code></pre><h4 class="notion-h notion-h3 notion-h-indent-2 notion-block-f9fe674d68f643a4bd4f912928ac1bb2" data-id="f9fe674d68f643a4bd4f912928ac1bb2"><span><div id="f9fe674d68f643a4bd4f912928ac1bb2" class="notion-header-anchor"></div><a class="notion-hash-link" href="#f9fe674d68f643a4bd4f912928ac1bb2" title="3D visualization of the nose orientation"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">3D visualization of the nose orientation</span></span></h4><div class="notion-text notion-block-b359d685cb7549648793ac259c275cd8">After computing the nose orientation, we visualize the 3D process of the movement. We save the simulation result as a GIF file. In the implementation, we write a separate function to reconstruct the 3D process.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python">def visualization_nose(orientation_nose, dir_crt):
    &quot;&quot;&quot;3D visualization of the nose orientation.

    Parameters
    ----------
    orientation_nose: Nose orientations during the movements.
    dir_crt: Directory of the current folder.

    Returns
    -------
    
    &quot;&quot;&quot;

    fig = plt.figure()
    # Initialize the 3D coordinates.
    ax = fig.add_subplot(projection=&#x27;3d&#x27;)
    # Visualize the nose vectors dynamically.
    x = np.linspace(-1, 1, 5)
    y = np.linspace(-1, 1, 5)
    X, Y = np.meshgrid(x, y)
    # Make sure the output folder for the temporary frames exists.
    os.makedirs(os.path.join(dir_crt, &#x27;output&#x27;, &#x27;3D_tmp&#x27;), exist_ok=True)
    for i in range(len(orientation_nose)):
        # Progress meter
        PSG.one_line_progress_meter(
            &#x27;Video frame generating.&#x27;,
            i+1,
            len(orientation_nose),
            &#x27;&#x27;,
            &#x27;Generating frames of the nose orientation...&#x27;
        )
        # Clear the current figure.
        plt.cla()
        # 3D surfaces for better visualization
        ax.plot_surface(X, Y, Z=X*0, color=&#x27;g&#x27;, alpha=0.2)
        ax.plot_surface(X, Y=X*0, Z=Y, color=&#x27;y&#x27;, alpha=0.2)
        ax.plot_surface(X=X*0, Y=Y, Z=X, color=&#x27;r&#x27;, alpha=0.2)
        # Plot a 3D arrow (Nose).
        vec_tmp = orientation_nose[i]
        ax.quiver(0, 0, 0, 
                  vec_tmp[0], vec_tmp[1], vec_tmp[2], 
                  arrow_length_ratio=0.2, color=&#x27;black&#x27;, normalize=True)
        # Plot settings
        ax.set_xlabel(&#x27;x&#x27;)
        ax.set_ylabel(&#x27;y&#x27;)
        ax.set_zlabel(&#x27;z&#x27;)
        ax.set_xlim(-1.0, 1.0)
        ax.set_ylim(-1.0, 1.0)
        ax.set_zlim(-1.0, 1.0)
        plt.title(&#x27;myNose &#x27; + str(i) + &#x27;/&#x27; + str(len(orientation_nose)))
        path_tmp = os.path.join(dir_crt, &#x27;output&#x27;, &#x27;3D_tmp&#x27;, str(i)+&#x27;.PNG&#x27;)
        plt.savefig(fname=path_tmp)
    # imageio
    gif_images = []
    for i in range(len(orientation_nose)):
        # Progress meter
        PSG.one_line_progress_meter(
            &#x27;GIF generating.&#x27;,
            i+1,
            len(orientation_nose),
            &#x27;&#x27;,
            &#x27;Generating the GIF of the nose orientation...&#x27;
        )
        # GIF images
        path_tmp = os.path.join(dir_crt, &#x27;output&#x27;, &#x27;3D_tmp&#x27;, str(i)+&#x27;.PNG&#x27;)
        gif_images.append(imageio.imread(path_tmp))
    
    # Save the output
    path_save = os.path.join(dir_crt, &#x27;output&#x27;, &#x27;myNose.gif&#x27;)
    imageio.mimsave(path_save, gif_images, fps=50)
    print(&#x27;The GIF which shows the nose orientation has been saved to:&#x27;)
print(path_save)</code></pre><hr class="notion-hr notion-block-e20da1141d004082827ec11fcd193d3f"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-8a3c8cf024714f05b3bb65039da3aa96" data-id="8a3c8cf024714f05b3bb65039da3aa96"><span><div id="8a3c8cf024714f05b3bb65039da3aa96" class="notion-header-anchor"></div><a class="notion-hash-link" href="#8a3c8cf024714f05b3bb65039da3aa96" title="Demo"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Demo</span></span></h2><div class="notion-text notion-block-32e6585650c2476f81b9b4b103d436d2">Here we present some results of this simulation task. Running the simulation typically takes 1-2 minutes (most of the time is spent on the 3D visualization).</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-6d6021fdf7764436be0962c4fa54f0f5"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:480px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F57625a20-449d-4ba3-a0d2-d2c88fe3048b%2Fhead_orientation.png?table=block&amp;id=6d6021fd-f776-4436-be09-62c4fa54f0f5" alt="Figure 3: The head orientation during the moving process." 
loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 3: The head orientation during the moving process.</b></figcaption></div></figure><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-e4a78e994bc34845a7b8d815c41318a1"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:480px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F062b9489-ea2f-4b87-a70f-e652c2b6673c%2Fmnggiflab-compressed-myNose_(1).gif?table=block&amp;id=e4a78e99-4bc3-4845-a7b8-d815c41318a1" alt="Figure 4: 3D visualization of the nose orientation." loading="lazy" decoding="async"/><figcaption class="notion-asset-caption"><b>Figure 4: 3D visualization of the nose orientation.</b></figcaption></div></figure><div class="notion-text notion-block-3c4086f2c0564dbeafaf7df20fa231e3">The complete source code can be found in my GitHub.</div></main></div>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Simulation of a Retinal/Visual Implant]]></title>
            <link>https://shuoli199909.com/article/eth-cs-ex3</link>
            <guid>https://shuoli199909.com/article/eth-cs-ex3</guid>
            <pubDate>Tue, 02 May 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Simulation of Sensory Systems (Exercise 3: The Visual System).]]></description>
            <content:encoded><![CDATA[<div id="container" class="mx-auto undefined"><main class="notion light-mode notion-page notion-block-a57e1fc9969349e09af7940112fe273e"><div class="notion-viewport"></div><div class="notion-collection-page-properties"></div><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-7e0ce2b52fbd464b8939fd98031f9856" data-id="7e0ce2b52fbd464b8939fd98031f9856"><span><div id="7e0ce2b52fbd464b8939fd98031f9856" class="notion-header-anchor"></div><a class="notion-hash-link" href="#7e0ce2b52fbd464b8939fd98031f9856" title="Background"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Background</span></span></h2><div class="notion-text notion-block-4fee013753214c14a3847c293c95f3fb">The idea of a “visual prosthesis” is not as far-fetched as it might seem at first, and a large number of research groups are working on different approaches. A recent overview can be found in <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://journals.sagepub.com/doi/epdf/10.1177/2515841418817501">Retinal Prosthesis (Bloch, 2019)</a>. The exercise description was created by <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://ee.ethz.ch/the-department/people-a-z/person-detail.NDg1ODU=.TGlzdC8zMjc5LC0xNjUwNTg5ODIw.html">Prof. Dr. Thomas Haslwanter</a>. 
</div><hr class="notion-hr notion-block-4f8a477e97bf4e948eddfa86f58c33b6"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-781221dee70e47108082748ac72b6beb" data-id="781221dee70e47108082748ac72b6beb"><span><div id="781221dee70e47108082748ac72b6beb" class="notion-header-anchor"></div><a class="notion-hash-link" href="#781221dee70e47108082748ac72b6beb" title="Exercise Description: Simulation of a Retinal/Visual Implant"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title"><b>Exercise Description: Simulation of a Retinal/Visual Implant</b></span></span></h2><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-ac87b0366e824c6cb59704ca42177165" data-id="ac87b0366e824c6cb59704ca42177165"><span><div id="ac87b0366e824c6cb59704ca42177165" class="notion-header-anchor"></div><a class="notion-hash-link" href="#ac87b0366e824c6cb59704ca42177165" title="Data"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Data</span></span></h3><div class="notion-text notion-block-a360291364a94cb3a720fb4ff802dd04">All the Files for this exercise are bundled in “Ex_Visual.zip”</div><div class="notion-text 
notion-block-a89db82431164ed79499760e53c1403d">In addition, you can use the following files:</div><ul class="notion-list notion-list-disc notion-block-03f6cbbc5f9c4a4a8827f5013e41d9f6"><li>Typical standard test images that are often used in image processing (e.g. lena, mandrill, etc.) can also be found at the <a target="_blank" rel="noopener noreferrer" class="notion-link" href="http://links.uwaterloo.ca/oldwebsite/bragzone.base.html">Waterloo BragZone</a>.</li></ul><ul class="notion-list notion-list-disc notion-block-d43185ce8e8740188855cb74e5f2c27c"><li>You can also use one of the following:</li></ul><figure class="notion-asset-wrapper notion-asset-wrapper-embed notion-block-594069bfc91f4cf4acb02399776a398a"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:336px;max-width:100%;flex-direction:column;height:254px"><iframe class="notion-asset-object-fit" src="https://embed.notion.co/api/iframe?url=https%3A%2F%2Fwww.imagehub.cc%2Fimage%2F1g9NLg&amp;key=656ac74fac4fff346b811dca7919d483" title="iframe embed" frameBorder="0" allowfullscreen="" loading="lazy" scrolling="auto"></iframe></div><figcaption class="notion-asset-caption">Figure 1: TheDoor.jpg (146 kB)</figcaption></figure><figure class="notion-asset-wrapper notion-asset-wrapper-embed notion-block-d59bd55b30584d0497b191ce29d63fa8"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:336px;max-width:100%;flex-direction:column;height:251px"><iframe class="notion-asset-object-fit" src="https://embed.notion.co/api/iframe?url=https%3A%2F%2Fwww.imagehub.cc%2Fimage%2F1g9jLk&amp;key=656ac74fac4fff346b811dca7919d483" title="iframe embed" frameBorder="0" allowfullscreen="" loading="lazy" scrolling="auto"></iframe></div><figcaption class="notion-asset-caption">Figure 2: eye.bmp (434 kB)</figcaption></figure><figure class="notion-asset-wrapper notion-asset-wrapper-embed notion-block-987119e5666f437db48dd33d043d7cad"><div 
style="position:relative;display:flex;justify-content:center;align-self:center;width:336px;max-width:100%;flex-direction:column;height:329.1875px"><iframe class="notion-asset-object-fit" src="https://embed.notion.co/api/iframe?url=https%3A%2F%2Fwww.imagehub.cc%2Fimage%2F1gQNeq&amp;key=656ac74fac4fff346b811dca7919d483" title="iframe embed" frameBorder="0" allowfullscreen="" loading="lazy" scrolling="auto"></iframe></div><figcaption class="notion-asset-caption">Figure 3: lena.tif (769 kB)</figcaption></figure><ul class="notion-list notion-list-disc notion-block-235932a90cbf42ef967f3023bd94d21f"><li>Hans van Hateren hosts a <a target="_blank" rel="noopener noreferrer" class="notion-link" href="http://bethgelab.org/datasets/vanhateren/">website with natural images</a> that people often use for training receptive fields, etc.</li></ul><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-cba5b1cbf50045bdb891475e4bd21f23" data-id="cba5b1cbf50045bdb891475e4bd21f23"><span><div id="cba5b1cbf50045bdb891475e4bd21f23" class="notion-header-anchor"></div><a class="notion-hash-link" href="#cba5b1cbf50045bdb891475e4bd21f23" title="General Requirements"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">General Requirements</span></span></h3><div class="notion-text notion-block-a0769267237a42a3bd8fb2d975d62610">For this exercise you should design a &quot;visual prosthesis&quot;: Write a Python program which</div><ol start="1" class="notion-list notion-list-numbered notion-block-d2b09cde32274bc297bf262ee2c6c5d1"><li>Takes a given input image, or - if none is 
provided - lets you interactively select an input image</li></ol><ol start="2" class="notion-list notion-list-numbered notion-block-2857b115e396410f9ff3b19c6ea81cda"><li>In this image, lets you interactively select a fixation point (&quot;ginput&quot;)</li></ol><ol start="3" class="notion-list notion-list-numbered notion-block-013f51ff2eba47178d4d319b2f5432f7"><li>Calculates the <em>activity in the retinal ganglion cells</em>, and shows the corresponding activity, and</li></ol><ol start="4" class="notion-list notion-list-numbered notion-block-411a21c87f1b4951ba795305158c8e42"><li>Calculates and shows the <em>activity in the primary visual cortex</em>, and</li></ol><ol start="5" class="notion-list notion-list-numbered notion-block-ff437eb5d93145cf88a1fb23d00cd3eb"><li>Saves the images to an output file.</li></ol><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-f50a3dd23592466094a92024e9f93114" data-id="f50a3dd23592466094a92024e9f93114"><span><div id="f50a3dd23592466094a92024e9f93114" class="notion-header-anchor"></div><a class="notion-hash-link" href="#f50a3dd23592466094a92024e9f93114" title="Retinal Ganglion Cells"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Retinal Ganglion Cells</span></span></h3><ul class="notion-list notion-list-disc notion-block-e71c101b60c344438fee0e60464f526c"><li>Assume that</li><ul class="notion-list notion-list-disc notion-block-e71c101b60c344438fee0e60464f526c"><li>the display has a resolution (for those 30 cm) of 1400 pixels,</li><li>and is viewed at a distance of 60 centimeters (see Figure 
below),</li><li>and that the radius of the eye is typically 1.25 cm.</li><div class="notion-text notion-block-21a3e44f3aea4a2dad5726ac1791c338">This lets you convert pixel location to retinal location.</div><figure class="notion-asset-wrapper notion-asset-wrapper-embed notion-block-a70f1fc770fe451eadf885738f4f704f"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:500px;max-width:100%;flex-direction:column;height:222px"><iframe class="notion-asset-object-fit" src="https://embed.notion.co/api/iframe?url=https%3A%2F%2Fwww.imagehub.cc%2Fimage%2F1g9QVI&amp;key=656ac74fac4fff346b811dca7919d483" title="iframe embed" frameBorder="0" allowfullscreen="" loading="lazy" scrolling="auto"></iframe></div></figure></ul></ul><ul class="notion-list notion-list-disc notion-block-05cac305c7c94181ab1e9ec2955b820f"><li>We know that the retinal ganglion cells respond best to a &quot;center-surround&quot; stimulus: they show the maximum response when the center is bright and the surrounding dark (&quot;center-on cells&quot;), or vice versa (&quot;center-off cells&quot;). This behavior can be simulated with a &quot;Difference of Gaussians&quot; (DOG)-filter. For this exercise, simulate only &quot;center-on&quot; responses. The figure below shows a section through the receptive field of a typical ganglion cell. 
The receptive field of such a cell can be simulated with a &quot;difference-of-Gaussians&quot; (DOG)-filter with the following ratio for the standard deviations of the two Gaussians:</li><ul class="notion-list notion-list-disc notion-block-05cac305c7c94181ab1e9ec2955b820f"><span role="button" tabindex="0" class="notion-equation notion-equation-block"><span></span></span><div class="notion-text notion-block-3b33e0ddd5ed4e5a94dbe3abfbe7092a">From the figure below we see that the sidelength of the receptive field should be about</div><span role="button" tabindex="0" class="notion-equation notion-equation-block"><span></span></span><div class="notion-text notion-block-fcea5d8c577442d29341c3b772e079de">so that the response can go back approximately to zero at the edges (which happens at about <span role="button" tabindex="0" class="notion-equation notion-equation-inline"><span></span></span>).</div><figure class="notion-asset-wrapper notion-asset-wrapper-embed notion-block-e24b1e8289604756ba65d3c85175f0b9"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:624px;max-width:100%;flex-direction:column;height:475px"><iframe class="notion-asset-object-fit" src="https://embed.notion.co/api/iframe?url=https%3A%2F%2Fwww.imagehub.cc%2Fimage%2F1g9qQj&amp;key=656ac74fac4fff346b811dca7919d483" title="iframe embed" frameBorder="0" allowfullscreen="" loading="lazy" scrolling="auto"></iframe></div></figure></ul></ul><ul class="notion-list notion-list-disc notion-block-254ea9e239ae47b4b74dff0ae1fafc61"><li>The receptive field size increases approximately linearly with distance from the fovea. 
For this exercise we simulate only magnocellular cells, which have a receptive field size of approximately</li><ul class="notion-list notion-list-disc notion-block-254ea9e239ae47b4b74dff0ae1fafc61"><span role="button" tabindex="0" class="notion-equation notion-equation-block"><span></span></span><div class="notion-text notion-block-ab33d86e95424b49b4b0de3d7e5fee50">The parameters are also described in the <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://en.wikibooks.org/wiki/Sensory_Systems/Computer_Models/Descriptive_Simulations_of_Visual_Information_Processing">wikibook on Sensory Systems</a>, and in the article <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://learning.oreilly.com/library/view/handbook-of-image/9780121197926/xhtml/B9780121197926500838.htm">Computational models of early human vision</a>, which contains the following image for M- and P-cells:</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-1d98bac5935f4548876df28a704f085e"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:100%;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://s1.imagehub.cc/images/2023/10/22/RetinalGanglion_3.png" alt="notion image" loading="lazy" decoding="async"/></div></figure><div class="notion-text notion-block-6d19f2d2ffb84c19948df11ee4bdd46f"><b>Note:</b> Take this parameter as an approximation: I have found different values in the literature, regarding &quot;size of receptive field&quot;, &quot;size of dendritic field&quot;, &quot;center size&quot;, &quot;visual acuity&quot;, etc., and their exact relation to each other.</div></ul></ul><ul class="notion-list notion-list-disc notion-block-3d329df9cd1244e7a3728a7ee1d5ecb2"><li>To implement the simulation of the retinal representation of the image, proceed as follows:</li><ul class="notion-list notion-list-disc 
notion-block-3d329df9cd1244e7a3728a7ee1d5ecb2"><div class="notion-text notion-block-f7ebfc0ec23842e4a9ccafa7f51563dd">From the selected fixation point, find the largest distance to one of the four corners of the image. Break this distance down into 10 intervals. Using those intervals, create 10 corresponding radial zones around the fixation point. For each zone, we want to find the corresponding filter: we can do so by taking the mean radius for each zone [in pixels]. From this we can find the corresponding eccentricity relative to the fovea [in mm], using the geometry from the figure above. This eccentricity leads to the size of the receptive field [in arcmin], which in turn can be converted into pixels, again using the geometry shown above. Rounding up to the next largest odd number, create a symmetric filter with that side length, and choose the filter coefficients such that they represent the corresponding DOG-filter.</div></ul></ul><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-741f732073c145389aa931b0496f75b3" data-id="741f732073c145389aa931b0496f75b3"><span><div id="741f732073c145389aa931b0496f75b3" class="notion-header-anchor"></div><a class="notion-hash-link" href="#741f732073c145389aa931b0496f75b3" title="Cells in V1"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Cells in V1</span></span></h3><ul class="notion-list notion-list-disc notion-block-9d788185243244afbc5ab6d592f04952"><li>Activity in V1 can be simulated by <em>Gabor filters</em> with different orientations. 
For this exercise, first only use Gabor filters which respond to vertical lines.</li></ul><ul class="notion-list notion-list-disc notion-block-6bb75cad19874ee997b2c593e9851912"><li>Since I have not been able to find explicit information about any dependence of receptive field size on distance from the fovea, please assume a constant receptive field size.</li></ul><ul class="notion-list notion-list-disc notion-block-32dbdb09605e48cd9c449cea5cf9614e"><li>Input is the original image, not the input from the ganglion cells! This is due to the definition of &quot;receptive field&quot;.</li></ul><ul class="notion-list notion-list-disc notion-block-1c426bdf75714eeda9009c4486000cfc"><li>Find parameters for this vertical Gabor-filter that lead to sensible results, as assessed by visual inspection of the resulting image.</li></ul><ul class="notion-list notion-list-disc notion-block-94ae26676669404eb482460e346db87f"><li>When this works, repeat this for the activity of Gabor cells with a few different orientations (0 - 30 - 60 - 90 - 120 - 150 deg), to get a &quot;combined image&quot; in V1.</li></ul><hr class="notion-hr notion-block-e593a1fbc4fe4ffe8d93f49b77afd8f0"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-b692e8f59b174efba6f76ce4aeb3b4a8" data-id="b692e8f59b174efba6f76ce4aeb3b4a8"><span><div id="b692e8f59b174efba6f76ce4aeb3b4a8" class="notion-header-anchor"></div><a class="notion-hash-link" href="#b692e8f59b174efba6f76ce4aeb3b4a8" title="Tips"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Tips</span></span></h2><h3 class="notion-h 
notion-h2 notion-h-indent-1 notion-block-c8d284bfad32467cbc271a52e9cbb56e" data-id="c8d284bfad32467cbc271a52e9cbb56e"><span><div id="c8d284bfad32467cbc271a52e9cbb56e" class="notion-header-anchor"></div><a class="notion-hash-link" href="#c8d284bfad32467cbc271a52e9cbb56e" title="Python"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Python</span></span></h3><ul class="notion-list notion-list-disc notion-block-b0fc06d29af04152a48ad3dca4c45ac1"><li><a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://www.notion.so/Coding/Python/gabor_demo.py">gabor_demo.py</a> uses <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://www.opencv.org/">OpenCV</a> to give a nice interactive example of how the output of different cells in V1 corresponds to different features of the image.</li></ul><ul class="notion-list notion-list-disc notion-block-337e376c3c9b430dbfa1f9cd391c5bba"><li>Don&#x27;t forget to check out the <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://github.com/thomas-haslwanter/CSS_ipynb">IPYNB notebooks</a> on image processing, which should provide a good introduction to image processing with Python.</li></ul><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-163a7485b7c944f0b201179b403e3246" data-id="163a7485b7c944f0b201179b403e3246"><span><div id="163a7485b7c944f0b201179b403e3246" class="notion-header-anchor"></div><a class="notion-hash-link" href="#163a7485b7c944f0b201179b403e3246" title="Interesting Links"><svg viewBox="0 0 16 16" width="16" 
height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Interesting Links</span></span></h3><ul class="notion-list notion-list-disc notion-block-5604f368f99542dcb199014aef492f9c"><li><a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://www.notion.so/Visual_System_Links.html">Optical illusions, the Lena Story etc.</a></li></ul><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-588d4f30fff84bd9821b650819b654fc" data-id="588d4f30fff84bd9821b650819b654fc"><span><div id="588d4f30fff84bd9821b650819b654fc" class="notion-header-anchor"></div><a class="notion-hash-link" href="#588d4f30fff84bd9821b650819b654fc" title="General comments"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">General comments</span></span></h3><ul class="notion-list notion-list-disc notion-block-a344e570cc59450383823b38671f1d23"><li>Name the main file <code class="notion-inline-code">Ex3_Visual.py</code> .</li></ul><ul class="notion-list notion-list-disc notion-block-ed061f1ad7dd41eeafa1d153b7f7a92c"><li>For submission of the exercises, please put all the required code-files that you have written, as well as the input- &amp; data-files that are required by your program, 
into one archive file. (&quot;zip&quot;, &quot;rar&quot;, or &quot;7z&quot;.) Only submit that one archive-file. Name the archive <code class="notion-inline-code">Ex3_[Submitters_LastNames].[zip/rar/7z]</code>.</li></ul><ul class="notion-list notion-list-disc notion-block-06b5d33388d34d0ea8ec42336d859072"><li>Please write your programs in such a way that they run, without modifications, in the folder where they are extracted to from the archive. (In other words, please write them such that I don&#x27;t have to modify them to make them run on my computer.) Provide the exact command that is required to run the program in the comment.</li></ul><ul class="notion-list notion-list-disc notion-block-f9589ade8dad4ab88f4dcdf4f5559fe2"><li>Please comment your programs properly: write a program header; use intelligible variable names; provide comments on what the program is supposed to be doing; give the date, version number, and name(s) of the programmer(s).</li></ul><ul class="notion-list notion-list-disc notion-block-f5012d4bd620474ca4903844b831a8c4"><li>To submit the file, go to &quot;Ex 3: Self-Grading&quot;.</li></ul><hr class="notion-hr notion-block-d5b739c256224707ba7f8d330917b6f2"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-9e58c405a9a746c5996a4b4f5c38e5b9" data-id="9e58c405a9a746c5996a4b4f5c38e5b9"><span><div id="9e58c405a9a746c5996a4b4f5c38e5b9" class="notion-header-anchor"></div><a class="notion-hash-link" href="#9e58c405a9a746c5996a4b4f5c38e5b9" title="Implementation"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span 
class="notion-h-title">Implementation</span></span></h2><div class="notion-text notion-block-bb9f8718747047f798d186d06730acd7">The whole procedure can be divided into two tasks: Task 1 and Task 2.</div><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-247f9dae04274f25bd3c9b5dc780fe1a" data-id="247f9dae04274f25bd3c9b5dc780fe1a"><span><div id="247f9dae04274f25bd3c9b5dc780fe1a" class="notion-header-anchor"></div><a class="notion-hash-link" href="#247f9dae04274f25bd3c9b5dc780fe1a" title="Read in the data"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Read in the data</span></span></h3><div class="notion-text notion-block-6407a338c703459b9f8bd30abceb4227">Before implementing the simulations of task 1 and task 2, we need to read in the pre-defined parameters and select the input image. 
The process is implemented using Python GUIs.</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python"># Load the pre-defined parameters.
dir_crt = os.getcwd()
Params = utils.Params_visual(path_options=os.path.join(dir_crt, &#x27;options.yaml&#x27;))
    
# GUI.
Params = utils.gui_params(Params=Params)
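
# A minimal sketch of the screen-to-retina geometry described above, assuming a
# display of 1400 pixels over 30 cm, viewed at 60 cm, and an eye radius of
# 1.25 cm (all values from the exercise text). The helper name is ours and is
# not part of the provided utils module.
def pix2retina_mm(dist_pix, px_per_cm=1400 / 30.0, dist_view_cm=60.0, r_eye_cm=1.25):
    """Convert a distance on the screen [pixels] into an eccentricity on the retina [mm]."""
    dist_cm = dist_pix / px_per_cm                 # Distance on the screen [cm].
    angle_rad = np.arctan2(dist_cm, dist_view_cm)  # Corresponding visual angle [rad].
    return angle_rad * r_eye_cm * 10.0             # Arc length on the retina [mm].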

# Select the input image.
layout = [[PSG.Text(&#x27;Select the original image (Browse or type in directly).&#x27;)], 
          [PSG.InputText(), PSG.FileBrowse(&#x27;Select Image&#x27;)],
          [PSG.OK(), PSG.Cancel()]]
window = PSG.Window(&#x27;Select the input&#x27;, layout=layout, keep_on_top=True)

while True:
    event, values = window.read()
    if event == &#x27;OK&#x27;:
        # User hit the OK button.
        break
    elif event in (None, &#x27;Cancel&#x27;):
        # User closed the window or hit the Cancel button.
        sys.exit()
    print(f&#x27;Event: {event}&#x27;)
    print(str(values))
 
window.close()
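
# For reference, a minimal sketch of the DoG helper that fun_task_1 below
# assumes. The real get_DoG_filter lives in utils and also derives the kernel
# size and sigmas from the eccentricity; the 1:1.6 sigma ratio here is a common
# textbook choice and only a placeholder for the ratio given in the exercise.
def make_DoG_filter(size, sigma_center, ratio=1.6):
    """Return a center-on difference-of-Gaussians kernel of shape (size, size), size odd."""
    half = size // 2
    x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    r2 = x**2 + y**2
    sigma_surround = ratio * sigma_center
    gauss_center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    gauss_surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return gauss_center - gauss_surround  # Positive center, negative surround.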
img_input = plt.imread(fname=values[&#x27;Select Image&#x27;])
if len(img_input.shape) == 3:  #  Color image
    img_input = rgb2gray(img_input)
Params.size_img_ori = np.flip(np.array(img_input.shape))
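
# For reference, a minimal sketch of the Gabor helper that fun_task_2 below
# assumes. The real get_Gabor_filter lives in utils and reads its parameters
# from Params; the size/sigma/wavelength/gamma values here are illustrative only.
def make_Gabor_filter(theta_deg, size=21, sigma=4.0, wavelength=10.0, gamma=0.5):
    """Return a real-valued Gabor kernel tuned to orientation theta [deg]."""
    half = size // 2
    x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    theta = np.deg2rad(theta_deg)
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # Rotate the coordinate system.
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_rot**2 + gamma**2 * y_rot**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_rot / wavelength)  # Envelope times carrier.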
img_input = cv2.resize(img_input, dsize=tuple(Params.size_img_default), interpolation=cv2.INTER_LINEAR)</code></pre><div class="notion-text notion-block-c348a3adcc894fa3b4207cdf54ff90ae">In the experiment, we used the Lena image (Figure 3) as the input.</div><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-73d85f349b4e4166816ebb849972ac0b" data-id="73d85f349b4e4166816ebb849972ac0b"><span><div id="73d85f349b4e4166816ebb849972ac0b" class="notion-header-anchor"></div><a class="notion-hash-link" href="#73d85f349b4e4166816ebb849972ac0b" title="Task 1"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Task 1</span></span></h3><div class="notion-text notion-block-a0783f27080140d09078ec7a2837ba52">In task 1, we simulate the activity in the retinal ganglion cells. 
The whole process is :</div><ol start="1" class="notion-list notion-list-numbered notion-block-1a8107b8e27048b19ce3bc8e702f27e0"><li>Calculate the largest distance from the fixation point to the four corners.</li></ol><ol start="2" class="notion-list notion-list-numbered notion-block-1e4ab71169d64e9896e17f822e39849e"><li>Divide the distance into several intervals and create circular zones.</li></ol><ol start="3" class="notion-list notion-list-numbered notion-block-0ea5d46db95742afaeeb4a9a1e34a61f"><li>Determine one DoG filter for each circular zone separately.</li></ol><ol start="4" class="notion-list notion-list-numbered notion-block-08406501cd8a405fb51200ad5a273c4e"><li>Apply the DoG filters to the input image.</li></ol><div class="notion-text notion-block-5c7f63547313411c906694b5fe34ceb5">We implemented the code using a separate function. The code is:</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python">def fun_task_1(img_input, Params):
    &quot;&quot;&quot;Task-1 in Exercise-3.
    Task-1 is about the simulation of the activity in the retinal ganglion cells.
    The whole process is:
    1. Calculate the largest distance from the fixation point to the four corners.
    2. Divide the distance into several intervals and create circular zones.
    3. Determine one DoG filter for each circular zone separately.
    4. Apply the DoG filters to the input image.
    5. Threshold the filtered image to obtain the ON and OFF responses.

    Parameters
    ----------
    img_input: The input image in grayscale. Numpy array.
    Params: A class containing the pre-defined parameters.

    Returns
    -------
    response_ON: Responses of the ON ganglion cells. Numpy array.
    response_OFF: Responses of the OFF ganglion cells. Numpy array.
    &quot;&quot;&quot;

    # Visualization of the input image.
    fig, ax = plt.subplots(dpi=200, figsize=(8, 6))
    plt.imshow(
        X=cv2.resize(src=img_input, dsize=tuple(Params.size_img_ori), interpolation=cv2.INTER_LINEAR),
        cmap=&#x27;gray&#x27;
        )  #  Display in grayscale.
    plt.title(label=&#x27;The original image.&#x27;)
    fig.canvas.mpl_connect(&#x27;button_press_event&#x27;, onclick)
    plt.show()

    # Return the fixation coordinates.
    dir_crt = os.getcwd()
    coords_fixation = np.load(os.path.join(dir_crt, &#x27;coords_tmp.npy&#x27;), allow_pickle=True)
    # Resize
    coords_fixation[0] = coords_fixation[0]*Params.size_img_default[0]/Params.size_img_ori[0]
    coords_fixation[1] = coords_fixation[1]*Params.size_img_default[1]/Params.size_img_ori[1]

    # Calculate the farthest distance to the corner and divide the distance into several intervals.
    coords_interval, zones = coord2interval(
        coords_fixation=coords_fixation, 
        img_input=img_input, 
        Params=Params
        )
    
    # Apply the appropriate DoG filter to each level.
    img_output = np.empty_like(img_input)
    for i_zone in range(1, Params.num_interval + 1):
        # Calculate the distance between the fixation point and the interval mid-point.
        dist_tmp = np.linalg.norm(coords_fixation - coords_interval[i_zone-1, :])
        # Apply the DoG filter.
        filter_DoG = get_DoG_filter(img=img_input, dist_pix=dist_tmp, Params=Params)
        img_tmp = signal.convolve2d(in1=img_input, in2=filter_DoG, mode=&#x27;same&#x27;)
        idx = (zones == i_zone)
        img_output[idx] = img_tmp[idx]
    
    # Simulate the ON-OFF reaction.
    # ON cells.
    response_ON = np.empty_like(img_output)
    response_ON[img_output &gt; Params.t_ganglion_ON] = 1
    response_ON[img_output &lt;= Params.t_ganglion_ON] = 0
    # OFF cells.
    response_OFF = np.empty_like(img_output)
    response_OFF[img_output &gt; Params.t_ganglion_OFF] = 0
    response_OFF[img_output &lt;= Params.t_ganglion_OFF] = 1

    # Resize to the original scale.
    response_ON = cv2.resize(src=response_ON, dsize=tuple(Params.size_img_ori), interpolation=cv2.INTER_LINEAR)
    response_OFF = cv2.resize(src=response_OFF, dsize=tuple(Params.size_img_ori), interpolation=cv2.INTER_LINEAR)

    # Visualization of the ON-OFF reaction.
    fig, ax = plt.subplots(dpi=200, figsize=(16, 6))
    plt.subplot(1, 2, 1)
    plt.imshow(X=response_ON, cmap=&#x27;gray&#x27;)  #  Display in grayscale.
    plt.title(label=&#x27;Responses of ON ganglion cells.&#x27;)
    plt.subplot(1, 2, 2)
    plt.imshow(X=response_OFF, cmap=&#x27;gray&#x27;)
    plt.title(label=&#x27;Responses of OFF ganglion cells.&#x27;)
    fig.canvas.mpl_connect(&#x27;button_press_event&#x27;, onclick)
    plt.show()

    return response_ON, response_OFF</code></pre><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-888f531aeb164d0e8154647e43821e4d" data-id="888f531aeb164d0e8154647e43821e4d"><span><div id="888f531aeb164d0e8154647e43821e4d" class="notion-header-anchor"></div><a class="notion-hash-link" href="#888f531aeb164d0e8154647e43821e4d" title="Task 2"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Task 2</span></span></h3><div class="notion-text notion-block-a9b43a7f69724c688b88361f01a23a94">In task 2, we simulate the activities in V1. The whole process is: </div><ol start="1" class="notion-list notion-list-numbered notion-block-3698a1f933a14ea8b9ab70e158466770"><li>Calculate the Gabor filters of different orientations.</li></ol><ol start="2" class="notion-list notion-list-numbered notion-block-9afa94adb92b40d3afcf0128b1e7c69a"><li>Apply them to the input image separately.</li></ol><ol start="3" class="notion-list notion-list-numbered notion-block-f6e032a62f1d49b4ad12b15f44b7d63a"><li>Combine the results and take the average.</li></ol><div class="notion-text notion-block-d84043d2708648028d7ba5e2fda2343b">The function of task 2:</div><pre class="notion-code"><div class="notion-code-copy"><div class="notion-code-copy-button"><svg fill="currentColor" viewBox="0 0 16 16" width="1em" version="1.1"><path fill-rule="evenodd" d="M0 6.75C0 5.784.784 5 1.75 5h1.5a.75.75 0 010 1.5h-1.5a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-1.5a.75.75 0 011.5 0v1.5A1.75 1.75 0 019.25 16h-7.5A1.75 1.75 0 010 14.25v-7.5z"></path><path 
fill-rule="evenodd" d="M5 1.75C5 .784 5.784 0 6.75 0h7.5C15.216 0 16 .784 16 1.75v7.5A1.75 1.75 0 0114.25 11h-7.5A1.75 1.75 0 015 9.25v-7.5zm1.75-.25a.25.25 0 00-.25.25v7.5c0 .138.112.25.25.25h7.5a.25.25 0 00.25-.25v-7.5a.25.25 0 00-.25-.25h-7.5z"></path></svg></div></div><code class="language-python">def fun_task_2(img_input, Params):
    &quot;&quot;&quot;Task-2 in Exercise-3.
    Task-2 is about the simulation of the activities in V1.

    Parameters
    ----------
    img_input: The input image in grayscale. Numpy array.
    Params: A class containing the pre-defined parameters.

    Returns
    -------
    img_output: The output image in grayscale. Numpy array.
    &quot;&quot;&quot;

    # First, only use a Gabor filter which responds to vertical lines.
    filter_gabor_vertical = get_Gabor_filter(theta=180, Params=Params)
    img_vertical = signal.convolve2d(in1=img_input, in2=filter_gabor_vertical, mode=&#x27;same&#x27;)
    img_vertical[img_vertical &lt; 0] = 0  # Half-wave rectification: firing rates cannot be negative.

    # Then, try Gabor filters with different orientations (0, 30, 60, 90, 120, 150 deg).
    # Note: 180 deg is excluded, since it would duplicate the 0 deg orientation.
    thetas = np.arange(0, 180, 30)
    img_output = np.empty(shape=(img_input.shape[0], img_input.shape[1], len(thetas)))
    for i_theta in range(len(thetas)):
        filter_gabor_tmp = get_Gabor_filter(theta=thetas[i_theta], Params=Params)
        img_output[:, :, i_theta] = signal.convolve2d(in1=img_input, in2=filter_gabor_tmp, mode=&#x27;same&#x27;)
    img_output = np.mean(a=img_output, axis=2)
    img_output[img_output &lt; 0] = 0  # Half-wave rectification.

    # Resize to the original scale.
    img_vertical = cv2.resize(src=img_vertical, dsize=tuple(Params.size_img_ori), interpolation=cv2.INTER_LINEAR)
    img_output = cv2.resize(src=img_output, dsize=tuple(Params.size_img_ori), interpolation=cv2.INTER_LINEAR)

    # Visualization of the output image.
    fig, ax = plt.subplots(dpi=200, figsize=(16, 6))
    plt.subplot(1, 2, 1)
    plt.imshow(X=img_vertical, cmap=&#x27;gray&#x27;)  #  Display in grayscale.
    plt.title(label=&#x27;Only use the Gabor filter with vertical lines.&#x27;)
    plt.subplot(1, 2, 2)
    plt.imshow(X=img_output, cmap=&#x27;gray&#x27;)
    plt.title(label=&#x27;Use the Gabor filters in different orientations.&#x27;)
    fig.canvas.mpl_connect(&#x27;button_press_event&#x27;, onclick)
    plt.show()

    return img_output</code></pre><hr class="notion-hr notion-block-bc8fa820b09b4694aca5f7078bd1c61a"/><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-73dd0369a3a14a5391a072221804ca3c" data-id="73dd0369a3a14a5391a072221804ca3c"><span><div id="73dd0369a3a14a5391a072221804ca3c" class="notion-header-anchor"></div><a class="notion-hash-link" href="#73dd0369a3a14a5391a072221804ca3c" title="Example"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Example</span></span></h2><div class="notion-text notion-block-e423f5195cf4489ca9db83fecc8f3476">Here we present an example of the whole process of simulations. 
The original input image is Figure 3.</div><div class="notion-text notion-block-bb3b7f7d711340c891d81a6e28cba254">Pre-defined parameter setting:</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-8db2baef4142493ebfaa2e67cb0419ee"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:576px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://s1.imagehub.cc/images/2023/10/22/Example_1.png" alt="notion image" loading="lazy" decoding="async"/></div></figure><div class="notion-text notion-block-b7efb5b5c9954e0cb51ea93073529ebc">Select image:</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-34af5617f63f4e53930a1c2533b897a6"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:576px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://s1.imagehub.cc/images/2023/10/22/Example_2.png" alt="notion image" loading="lazy" decoding="async"/></div></figure><div class="notion-text notion-block-050fdcfc48a646fd903d28fb5cb073a6">The input image (grayscale):</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-bbd7ee272d6148338d4899313bef52b4"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:576px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://s1.imagehub.cc/images/2023/10/22/Example_3.png" alt="notion image" loading="lazy" decoding="async"/></div></figure><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-1d2642b63a474a78a5483aa0de728ff4" data-id="1d2642b63a474a78a5483aa0de728ff4"><span><div id="1d2642b63a474a78a5483aa0de728ff4" class="notion-header-anchor"></div><a class="notion-hash-link" href="#1d2642b63a474a78a5483aa0de728ff4" title="Task 1"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 
2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Task 1</span></span></h3><div class="notion-text notion-block-45e72aba15cc4577b9dcc04224c19eb8">Circular zone division: </div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-ef44b4166487471faf8fccb26bf1c56d"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:576px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://s1.imagehub.cc/images/2023/10/22/Example_4.png" alt="notion image" loading="lazy" decoding="async"/></div></figure><div class="notion-text notion-block-fbc43a8bb1824027aa8ee29f473aa2fc">Responses of ganglion cells after applying DoG filters:</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-9148e368f2c5482fb3762b5d8a8ebbef"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:1056px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://s1.imagehub.cc/images/2023/10/22/Example_5.png" alt="notion image" loading="lazy" decoding="async"/></div></figure><h3 class="notion-h notion-h2 notion-h-indent-1 notion-block-2a3e8798c5544521bb8c47abe9d26e2f" data-id="2a3e8798c5544521bb8c47abe9d26e2f"><span><div id="2a3e8798c5544521bb8c47abe9d26e2f" class="notion-header-anchor"></div><a class="notion-hash-link" href="#2a3e8798c5544521bb8c47abe9d26e2f" title="Task 2"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 
012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Task 2</span></span></h3><div class="notion-text notion-block-e4b8e73bea2745c2b25786206d2fc6dd">Simulation results of V1 activities:</div><figure class="notion-asset-wrapper notion-asset-wrapper-image notion-block-847510bed377478a9233117707aa2cc0"><div style="position:relative;display:flex;justify-content:center;align-self:center;width:1056px;max-width:100%;flex-direction:column"><img style="object-fit:cover" src="https://s1.imagehub.cc/images/2023/10/22/Example_6.png" alt="notion image" loading="lazy" decoding="async"/></div></figure><div class="notion-blank notion-block-6a1b6bd390584b84a094f88140ffbe9c"> </div></main></div>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[About Me]]></title>
            <link>https://shuoli199909.com/article/aboutme</link>
            <guid>https://shuoli199909.com/article/aboutme</guid>
            <pubDate>Sat, 21 Oct 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[A brief introduction of myself.]]></description>
            <content:encoded><![CDATA[<div id="container" class="mx-auto undefined"><main class="notion light-mode notion-page notion-block-3689b49707fb44c680adbfc3c0d55495"><div class="notion-viewport"></div><div class="notion-collection-page-properties"></div><h2 class="notion-h notion-h1 notion-h-indent-0 notion-block-995a9f4df547460d97afe76c46d5efed" data-id="995a9f4df547460d97afe76c46d5efed"><span><div id="995a9f4df547460d97afe76c46d5efed" class="notion-header-anchor"></div><a class="notion-hash-link" href="#995a9f4df547460d97afe76c46d5efed" title="Shuo Li (李碩)"><svg viewBox="0 0 16 16" width="16" height="16"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a><span class="notion-h-title">Shuo Li (李碩)</span></span></h2><div class="notion-text notion-block-3cc8c046db634539a58db2ad2d80020f">Shuo Li (李碩) is a master's student at <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://ethz.ch/de.html">ETH Zürich</a>, Switzerland, majoring in Biomedical Engineering on the <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://master-biomed.ethz.ch/education/bioimaging.html">Bioimaging</a> track.</div><div class="notion-blank notion-block-e82cce061c63458a842759eb4822ed7f"> </div><div class="notion-text notion-block-9bc8ef4122614d76bb56f1ee79e95ebb">From 2011 to 2017, Shuo Li completed his middle and high school education at <a target="_blank" rel="noopener noreferrer" class="notion-link" href="http://www.bj12hs.com.cn/jituanbanxue/benbu/">Beijing No.12 High School</a>. He was named a <b>Merit Student at the municipal level</b> for three consecutive years, and was later awarded the title of <b>Merit Student of Beijing</b> at the <b>provincial level</b>. He also won <b>the first prize</b> in <b>the Beijing Mathematics Competition</b> in 2015.</div><div class="notion-blank notion-block-006d9611c40e47dca9c6da3c13060dab"> </div><div class="notion-text notion-block-720462ba31404079b904e1f6103f02b3">After high school, Shuo Li took China's National College Entrance Examination in the summer of 2017 and enrolled in <a target="_blank" rel="noopener noreferrer" class="notion-link" href="http://jyxy.tju.edu.cn/en/">SPIOEE</a> (the School of Precision Instruments and Opto-Electronic Engineering) at <a target="_blank" rel="noopener noreferrer" class="notion-link" href="http://www.tju.edu.cn/english/About_TJU/TJU_Facts.htm">Tianjin University</a>. During his four years of undergraduate study, he majored in Electrical Engineering and took courses across several fields, including Optical Systems, Electrical Engineering, Statistics, Mechanical Engineering, and Computer Science, while also conducting research in computer vision and signal processing. He was named a <b>Merit Student of Tianjin University</b> and received the corresponding <b>scholarship</b> for three consecutive years. In 2018, he served as sports minister of the students’ union of <a target="_blank" rel="noopener noreferrer" class="notion-link" href="http://jyxy.tju.edu.cn/en/">SPIOEE</a>. At the end of his senior year, he was awarded the title of <b>Outstanding Graduate of Tianjin University</b>.</div><div class="notion-blank notion-block-4a5cf76790764cbd922a6a5a141745e0"> </div><div class="notion-text notion-block-7824db702cab4cf3b9cfaa7ad647a402">In 2021, Shuo Li began his master's studies in Biomedical Engineering at <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://ethz.ch/de.html">ETH Zürich</a>. During this time, he took a broad range of courses, completed the corresponding course projects, and carried out a semester project on <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://www.bing.com/search?q=neural+architecture+search&amp;cvid=7f4b8c2ee0364673b23b67e8f89f8e8e&amp;gs_lcrp=EgZjaHJvbWUqBAgBEAAyBggAEEUYOTIECAEQADIECAIQADIECAMQADIECAQQADIECAUQADIECAYQADIECAcQADIECAgQANIBCDMwMDFqMGo0qAIAsAIA&amp;FORM=ANAB01&amp;PC=CNNDDB&amp;mkt=zh-CN">Neural Architecture Search</a> (NAS) for <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://medium.com/@nathaliemariehager/an-introduction-to-neural-implicit-representations-with-use-cases-ad331ca12907">Implicit Neural Representation</a> (INR) models at the <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://vision.ee.ethz.ch/">Computer Vision Lab</a> (CVL). He also completed a research project on rPPG-based heart rate estimation at the <a target="_blank" rel="noopener noreferrer" class="notion-link" href="https://bmht.ethz.ch/">Biomedical and Mobile Health Technology Lab</a> (BMHT). He will soon start his Master's Thesis in Hong Kong and expects to complete his master's degree in April 2024.</div><div class="notion-blank notion-block-dd397ab9396b475b98b7bd54f98867ec"> </div><div class="notion-blank notion-block-94efa77825eb4832aeba5032d18db1e0"> </div></main></div>]]></content:encoded>
        </item>
    </channel>
</rss>