mastodon.world is one of the many independent Mastodon servers you can use to participate in the fediverse.
Generic Mastodon server for anyone to use.

Server stats: 12K active users
#cnns

0 posts · 0 participants · 0 posts today
Eric Jelli<p>Today I tried out <a href="https://academiccloud.social/tags/AMD" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AMD</span></a> <a href="https://academiccloud.social/tags/Instinct" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Instinct</span></a> <a href="https://academiccloud.social/tags/MI300a" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MI300a</span></a> for my existing Deep Learning pipeline. Good news: It worked out of the box. Bad news: For some reason it could not beat my local <a href="https://academiccloud.social/tags/Nvidia" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Nvidia</span></a> <a href="https://academiccloud.social/tags/1080ti" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>1080ti</span></a>... <br>After trying all sorts of <a href="https://academiccloud.social/tags/ROCM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ROCM</span></a> installation methods via prebuilt wheels, <a href="https://academiccloud.social/tags/apptainer" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>apptainer</span></a> images etc., I tried <a href="https://academiccloud.social/tags/nanogpt" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>nanogpt</span></a> by <span class="h-card" translate="no"><a href="https://sigmoid.social/@karpathy" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>karpathy</span></a></span> and sure enough: The GPT code ran approx. 2x faster than on an <a href="https://academiccloud.social/tags/a100" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>a100</span></a>... I hope that this is due to my programming skills. 
Not AMD preferring <a href="https://academiccloud.social/tags/transformers" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>transformers</span></a> over <a href="https://academiccloud.social/tags/CNNs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CNNs</span></a> ...</p>
Bytes Europe<p>Taupō makes CNN’s 25 best places to visit in 2025 list <a href="https://www.byteseu.com/646979/" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://www.</span><span class="">byteseu.com/646979/</span><span class="invisible"></span></a> #2025 #25 #518000 <a href="https://pubeurope.com/tags/advertising" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>advertising</span></a> <a href="https://pubeurope.com/tags/aotearoa" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>aotearoa</span></a> <a href="https://pubeurope.com/tags/best" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>best</span></a> <a href="https://pubeurope.com/tags/city" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>city</span></a> <a href="https://pubeurope.com/tags/cnns" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnns</span></a> <a href="https://pubeurope.com/tags/equivalent" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>equivalent</span></a> <a href="https://pubeurope.com/tags/estimates" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>estimates</span></a> <a href="https://pubeurope.com/tags/Global" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Global</span></a> <a href="https://pubeurope.com/tags/in" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>in</span></a> <a href="https://pubeurope.com/tags/list" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>list</span></a> <a href="https://pubeurope.com/tags/made" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>made</span></a> <a href="https://pubeurope.com/tags/makes" class="mention hashtag" rel="nofollow noopener noreferrer" 
target="_blank">#<span>makes</span></a> <a href="https://pubeurope.com/tags/NewZealand" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>NewZealand</span></a> <a href="https://pubeurope.com/tags/placement" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>placement</span></a> <a href="https://pubeurope.com/tags/places" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>places</span></a> <a href="https://pubeurope.com/tags/taup" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>taup</span></a> <a href="https://pubeurope.com/tags/to" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>to</span></a> <a href="https://pubeurope.com/tags/tourism" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>tourism</span></a> <a href="https://pubeurope.com/tags/visit" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>visit</span></a> <a href="https://pubeurope.com/tags/zealand" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>zealand</span></a></p>
Laurent Perrinet<p>🧠 Exploring secrets of human vision today at <a href="https://neuromatch.social/tags/McGill" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>McGill</span></a> University! I'll be talking about how our brains achieve efficient visual processing through foveated retinotopy - nature's brilliant solution for high-res central vision.</p><p>👉 When: Wednesday 9th of January 2025 at 12 noon.</p><p>👉 Where: CRN seminar room, Montreal General Hospital, Livingston Hall, L7-140, with hybrid option.</p><p>with Jean-Nicolas JÉRÉMIE and Emmanuel Daucé</p><p>📄 Read our findings: <a href="https://arxiv.org/abs/2402.15480" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2402.15480</span><span class="invisible"></span></a></p><p>TL;DR: Standard <a href="https://neuromatch.social/tags/CNNs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CNNs</span></a> naturally mimic human-like visual processing when fed images that match our retina's center-focused mapping. Could this be the key to more efficient AI vision systems?</p><p><a href="https://neuromatch.social/tags/ComputationalNeuroscience" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ComputationalNeuroscience</span></a></p><p><a href="https://neuromatch.social/tags/NeuroAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>NeuroAI</span></a></p><p><a href="https://laurentperrinet.github.io/talk/2025-01-08-brain-seminar/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">laurentperrinet.github.io/talk</span><span class="invisible">/2025-01-08-brain-seminar/</span></a></p>
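The "center-focused mapping" the talk describes can be caricatured with a log-polar sampling grid: dense near the fovea, sparse in the periphery. This is only an illustrative sketch of the general idea; the ring/wedge counts and function names are my assumptions, not the authors' actual model.

```python
import math

def log_polar_samples(size, n_rings=8, n_wedges=16):
    """(row, col) sample points: ring radii grow exponentially from the
    image center, so sampling is dense at the fovea and sparse at the edge."""
    center = (size - 1) / 2.0
    samples = []
    for k in range(n_rings):
        # radius grows exponentially with ring index, normalized to [0, center]
        r = center * (math.exp(k / (n_rings - 1)) - 1) / (math.e - 1)
        for w in range(n_wedges):
            a = 2 * math.pi * w / n_wedges
            samples.append((center + r * math.sin(a), center + r * math.cos(a)))
    return samples

def foveate(image, n_rings=8, n_wedges=16):
    """Nearest-neighbor sampling of a square grayscale image (list of rows)
    on the log-polar grid; returns a flat list of n_rings * n_wedges values."""
    n = len(image)
    clip = lambda v: min(max(int(round(v)), 0), n - 1)
    return [image[clip(r)][clip(c)] for r, c in log_polar_samples(n, n_rings, n_wedges)]
```

Feeding a CNN such retinotopically warped inputs, rather than a uniform pixel grid, is the kind of preprocessing the paper studies.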
Laurent Perrinet<p><a href="https://neuromatch.social/tags/ConvolutionalNeuralNetworks" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ConvolutionalNeuralNetworks</span></a> (<a href="https://neuromatch.social/tags/CNNs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CNNs</span></a> for short) are immensely useful for many <a href="https://neuromatch.social/tags/imageProcessing" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>imageProcessing</span></a> tasks and much more... </p><p>Yet you sometimes encounter bits of code with little explanation. Have you ever wondered about the origins of the values for image normalization in <a href="https://neuromatch.social/tags/imagenet" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>imagenet</span></a>?</p><ul><li>Mean: <code>[0.485, 0.456, 0.406]</code> (for the R, G and B channels respectively)</li><li>Std: <code>[0.229, 0.224, 0.225]</code></li></ul><p>Strangest to me is the need for three-digit precision. Here, after tracing the origin of these numbers for MNIST and ImageNet, I test whether that precision is really important: guess what, it is not (so much)!</p><p>👉 If interested in more details, check out <a href="https://laurentperrinet.github.io/sciblog/posts/2024-12-09-normalizing-images-in-convolutional-neural-networks.html" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">laurentperrinet.github.io/scib</span><span class="invisible">log/posts/2024-12-09-normalizing-images-in-convolutional-neural-networks.html</span></a></p>
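For context, those constants are applied channel-wise to every pixel before the image reaches the network. A minimal pure-Python sketch (the function name is mine, not from the post):

```python
# The ImageNet statistics quoted in the post, per channel (R, G, B).
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Map an (r, g, b) pixel with values in [0, 1] to roughly zero-mean,
    unit-variance channels, as done before feeding images to ImageNet CNNs."""
    return [(v - m) / s for v, m, s in zip(rgb, IMAGENET_MEAN, IMAGENET_STD)]

# A pixel exactly at the dataset mean maps to zero in every channel:
print(normalize_pixel([0.485, 0.456, 0.406]))  # [0.0, 0.0, 0.0]
```

The post's finding that three-digit precision barely matters makes sense here: perturbing the mean by ±0.005 shifts each normalized value by only about 0.02 standard deviations.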
JMLR<p>'A Rainbow in Deep Network Black Boxes', by Florentin Guth, Brice Ménard, Gaspar Rochette, Stéphane Mallat.</p><p><a href="http://jmlr.org/papers/v25/23-1573.html" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">http://</span><span class="ellipsis">jmlr.org/papers/v25/23-1573.ht</span><span class="invisible">ml</span></a> <br> <br><a href="https://sigmoid.social/tags/cnns" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnns</span></a> <a href="https://sigmoid.social/tags/deep" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>deep</span></a> <a href="https://sigmoid.social/tags/randomness" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>randomness</span></a></p>
KamalaHarrisWhere top and down-ballot races stand on the eve of the election Polls continue to show a tight r...<br><br><a href="https://thehill.com/opinion/campaign/4966801-tight-race-harris-trump/" rel="nofollow noopener noreferrer" target="_blank">https://thehill.com/opinion/campaign/4966801-tight-race-harris-trump/</a><br><br><a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/Campaign" target="_blank">#Campaign</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/Opinion" target="_blank">#Opinion</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/CNN’s" target="_blank">#CNN’s</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/“Poll" target="_blank">#“Poll</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/of" target="_blank">#of</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/Polls”" target="_blank">#Polls”</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/democrats" target="_blank">#democrats</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/Former" target="_blank">#Former</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/President" target="_blank">#President</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/Donald" target="_blank">#Donald</a> <a rel="nofollow noopener noreferrer" class="mention hashtag" href="https://mastodon.social/tags/Trump" target="_blank">#Trump</a><br><br><a href="https://awakari.com/pub-msg.html?id=2oOABYPNIWjQtvJMEIAThCGKVxq" rel="nofollow noopener noreferrer" target="_blank">Event Attributes</a>
JMLR<p>'Sample Complexity of Neural Policy Mirror Descent for Policy Optimization on Low-Dimensional Manifolds', by Zhenghao Xu, Xiang Ji, Minshuo Chen, Mengdi Wang, Tuo Zhao.</p><p><a href="http://jmlr.org/papers/v25/24-0066.html" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">http://</span><span class="ellipsis">jmlr.org/papers/v25/24-0066.ht</span><span class="invisible">ml</span></a> <br> <br><a href="https://sigmoid.social/tags/cnns" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnns</span></a> <a href="https://sigmoid.social/tags/cnn" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnn</span></a> <a href="https://sigmoid.social/tags/dimensional" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>dimensional</span></a></p>
JMLR<p>'A PDE-based Explanation of Extreme Numerical Sensitivities and Edge of Stability in Training Neural Networks', by Yuxin Sun, Dong Lao, Anthony Yezzi, Ganesh Sundaramoorthi.</p><p><a href="http://jmlr.org/papers/v25/23-0137.html" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">http://</span><span class="ellipsis">jmlr.org/papers/v25/23-0137.ht</span><span class="invisible">ml</span></a> <br> <br><a href="https://sigmoid.social/tags/sgd" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>sgd</span></a> <a href="https://sigmoid.social/tags/gradient" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gradient</span></a> <a href="https://sigmoid.social/tags/cnns" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnns</span></a></p>
PKPs Powerfromspace1<p>@arasrinivas <br>Entertainment aside, getting rid of <a href="https://mstdn.social/tags/CNNs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CNNs</span></a> for real-world <a href="https://mstdn.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> deployment is almost impossible. Even if you went for a ViT architecture, you must process the input using local patches with shared weights for efficiency and generalization. It is even more the case for processing multiple frames at a video level, like in Tesla FSD.</p><p>Full thread <a href="https://x.com/aravsrinivas/status/1795509237550059733?s=46" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">x.com/aravsrinivas/status/1795</span><span class="invisible">509237550059733?s=46</span></a></p>
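The "local patches with shared weights" point above can be sketched in a few lines: a ViT-style patch embedding applies one shared projection to every patch, the same parameter-sharing a CNN kernel uses at every location. Sizes and weights below are illustrative assumptions, not any production model's values.

```python
def patch_tokens(image, patch=2):
    """Split a square image (list of rows) into non-overlapping patches and
    project each flattened patch with ONE shared weight vector -- the same
    weights are reused at every spatial location, as in a CNN kernel."""
    shared_w = [0.5, -0.5, 0.25, -0.25]  # one set of weights for all patches
    n = len(image)
    tokens = []
    for i in range(0, n, patch):
        for j in range(0, n, patch):
            flat = [image[i + di][j + dj] for di in range(patch) for dj in range(patch)]
            tokens.append(sum(w * v for w, v in zip(shared_w, flat)))
    return tokens
```

Whether the downstream mixing is convolution or attention, this shared local projection is the part that is hard to do without, which is the post's argument.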
Light🐧⁂<p>I remembered playing with DeepDream, a computer vision program that uses a CNN (convolutional neural network), almost 10 years ago. How fun it was to be able to run it locally on one of my machines, even though it was very slow. Luckily I was able to find an already-trained model, because training it locally was not feasible for me.</p><p><a href="https://hachyderm.io/tags/convolutionalneuralnetwork" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>convolutionalneuralnetwork</span></a> <a href="https://hachyderm.io/tags/cnns" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnns</span></a></p>
George Constantinides<p>Please join us for this exciting public seminar from Mario Doumet (University of Toronto), who will be talking about their most recent work on <a href="https://mathstodon.xyz/tags/FPGA" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>FPGA</span></a> acceleration of <a href="https://mathstodon.xyz/tags/CNNs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CNNs</span></a>.<br>Zoom: <a href="https://utoronto.zoom.us/j/82613229697" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">utoronto.zoom.us/j/82613229697</span><span class="invisible"></span></a> <br>October 24, 2023 at 9am EDT (2pm UK time)</p>
Laurent Perrinet<p>A perspective on <a href="https://neuromatch.social/tags/chatGPT" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatGPT</span></a> (or Large Language Models <a href="https://neuromatch.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> in general): <a href="https://neuromatch.social/tags/Hype" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hype</span></a> or milestone?</p><p>[Rodney Brooks (<a href="https://spectrum.ieee.org/amp/gpt-4-calm-down-2660261157" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="ellipsis">spectrum.ieee.org/amp/gpt-4-ca</span><span class="invisible">lm-down-2660261157</span></a>) tells us that </p><blockquote><p>What large language models are good at is saying what an answer should <em>sound like</em>, which is different from what an answer should <em>be</em>.</p></blockquote><p>For a nice in-depth technical analysis, see this <a href="https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/" rel="nofollow noopener noreferrer" target="_blank">blog post by Stephen Wolfram</a> (himself!) on "What is ChatGPT Doing ... and Why Does It Work? ". Worth reading -even for non-experts- in a non-trivial effort to make the whole process explainable. The different steps are:</p><ul><li><p><u><a href="https://neuromatch.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> compute probabilities for the next word.</u> To do this, they aggregate huge datasets of text so that they create a function that, given a sequence of words, computes for all possible words in the dictionary the probability that adding this new word is statistically congruent with past words. 
Interestingly, this probability, conditioned on what has been observed so far, falls off as a power law, just like the global probability of words in the dictionary,</p></li><li><p><u>These <a href="https://neuromatch.social/tags/probabilities" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>probabilities</span></a> are computed by a function that leans on the dataset to generate the best approximation.</u> Wolfram gives a minute description of how to build such an approximation, moving from linear regression to non-linearities. This leads to deep learning methods and their potential as universal function approximators,</p></li><li><p><u>Crucial is how these <a href="https://neuromatch.social/tags/models" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>models</span></a> are trainable, in particular by way of <a href="https://neuromatch.social/tags/backpropagation" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>backpropagation</span></a>.</u> This leads the author to describe the process, but also to point out some limitations of the trained model, especially, as you might have guessed, compared to potentially more powerful systems, like <a href="https://neuromatch.social/tags/cellularautomata" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cellularautomata</span></a> of course...</p></li><li><p><u>This now brings us to <a href="https://neuromatch.social/tags/embeddings" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>embeddings</span></a>, the crucial ingredient to define "words" in these <a href="https://neuromatch.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a>.</u> To relate "alligator" to "crocodile" vs. 
a "vending machine," this technique computes distances between words based on how they co-occur in the large text corpus, so that each word is assigned an address in a high-dimensional space, with the intuition that words that are semantically closer should be closer in the embedding space. It is highly non-trivial to understand the geometry of high-dimensional spaces - especially when we try to relate it to our physical 3D space - but this technique has proven to give excellent results; I highly recommend the <a href="https://neuromatch.social/tags/cemantix" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cemantix</span></a> puzzle to test your intuition about word embeddings: <a href="https://cemantle.certitudes.org" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="">cemantle.certitudes.org</span><span class="invisible"></span></a></p></li><li><p><u>Finally, these different parts are glued together by a humongous <a href="https://neuromatch.social/tags/transformer" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>transformer</span></a> network.</u> A standard <a href="https://neuromatch.social/tags/NeuralNetwork" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>NeuralNetwork</span></a> could perform a computation to predict the probabilities for the next word, but the results would mostly be nonsensical... Something more is needed to make this work. Just as traditional Convolutional Neural Networks <a href="https://neuromatch.social/tags/CNNs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CNNs</span></a> hardwire the fact that operations applied to an image should be applied to nearby pixels first, transformers do not operate uniformly on the sequence of words (i.e., embeddings), but weight them differently to ultimately get a better approximation. 
It is clear that much of the mechanism is a bunch of heuristics selected based on their performance - but we can understand the mechanism as giving different weights to different tokens - specifically based on the position of each token and its importance in the meaning of the current sentence. Based on this calculation, the sequence is reweighted so that a probability is ultimately computed. When applied to a sequence of words where words are added progressively, this creates a kind of loop in which the past sequence is constantly re-processed to update the generation.</p></li><li><p><u>Can we do more and include syntax?</u> Wolfram discusses the internals of <a href="https://neuromatch.social/tags/chatGPT" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatGPT</span></a>, and in particular how it was trained to "be a good bot" - and adds another possibility, which is to inject the knowledge that language is organized grammatically, and to ask whether <a href="https://neuromatch.social/tags/transformers" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>transformers</span></a> are able to learn such rules. This points to certain limitations of the architecture and the potential of using graphs as a generalization of geometric rules. The post ends with a comparison of <a href="https://neuromatch.social/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a>, which just aim to sound right, with rule-based models, a debate reminiscent of the older days of AI...</p></li></ul>
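The first step in that walkthrough, computing next-word probabilities from a corpus, can be caricatured with bigram counts. This toy stand-in (my own illustration, not anything from Wolfram's post) shares only the interface with a real LLM: context in, probability distribution out.

```python
from collections import Counter, defaultdict

def bigram_model(corpus):
    """Count word-pair occurrences in a list of sentences and return a
    function mapping a word to its empirical next-word distribution."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1

    def next_word_probs(word):
        total = sum(counts[word].values())
        return {w: c / total for w, c in counts[word].items()} if total else {}

    return next_word_probs

probs = bigram_model(["the cat sat", "the cat ran", "the dog sat"])
print(probs("cat"))  # {'sat': 0.5, 'ran': 0.5}
```

An LLM replaces the count table with a learned function over long contexts, but generation is the same loop: sample a next word from the distribution, append it, recompute.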
Crypto News<p>What are convolutional neural networks? - Convolutional neural networks (CNNs) are a class of deep neural n... - <a href="https://cointelegraph.com/explained/what-are-convolutional-neural-networks" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="ellipsis">cointelegraph.com/explained/wh</span><span class="invisible">at-are-convolutional-neural-networks</span></a> <a href="https://schleuss.online/tags/convolutionalneuralnetworks" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>convolutionalneuralnetworks</span></a> <a href="https://schleuss.online/tags/cnns" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnns</span></a></p>
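The operation that gives CNNs their name can be shown in a few lines: slide one small shared kernel over the image and take a weighted sum at each position (strictly a cross-correlation, which is what deep-learning libraries implement under the name "convolution"). A minimal pure-Python sketch:

```python
def conv2d(image, kernel):
    """Valid cross-correlation of a 2D image with a 2D kernel (lists of rows)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(ow)
        ]
        for i in range(oh)
    ]

# A vertical-edge kernel responds where brightness changes left to right.
img = [[0, 0, 0, 1, 1]] * 5          # dark left half, bright right half
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
edges = conv2d(img, sobel_x)
print(edges[0])  # [0, 4, 4]: flat region gives 0, the edge gives a strong response
```

A CNN stacks many such kernels, learns their weights by backpropagation, and interleaves them with non-linearities and pooling.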
JMLR<p>'Topological Convolutional Layers for Deep Learning', by Ephy R. Love, Benjamin Filippenko, Vasileios Maroulas, Gunnar Carlsson.</p><p><a href="http://jmlr.org/papers/v24/21-0073.html" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">http://</span><span class="ellipsis">jmlr.org/papers/v24/21-0073.ht</span><span class="invisible">ml</span></a> <br> <br><a href="https://sigmoid.social/tags/cnn" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnn</span></a> <a href="https://sigmoid.social/tags/cnns" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnns</span></a> <a href="https://sigmoid.social/tags/topological" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>topological</span></a></p>
Published papers at TMLR<p>Deformation Robust Roto-Scale-Translation Equivariant CNNs</p><p>Liyao Gao, Guang Lin, Wei Zhu</p><p><a href="https://openreview.net/forum?id=yVkpxs77cD" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="ellipsis">openreview.net/forum?id=yVkpxs</span><span class="invisible">77cD</span></a></p><p><a href="https://sigmoid.social/tags/cnns" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnns</span></a> <a href="https://sigmoid.social/tags/convolutions" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>convolutions</span></a> <a href="https://sigmoid.social/tags/cnn" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnn</span></a></p>
Published papers at TMLR<p>MVSFormer: Multi-View Stereo by Learning Robust Image Features and Temperature-based Depth</p><p>Chenjie Cao, Xinlin Ren, Yanwei Fu</p><p><a href="https://openreview.net/forum?id=2VWR6JfwNo" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="ellipsis">openreview.net/forum?id=2VWR6J</span><span class="invisible">fwNo</span></a></p><p><a href="https://sigmoid.social/tags/cnns" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cnns</span></a> <a href="https://sigmoid.social/tags/vision" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>vision</span></a> <a href="https://sigmoid.social/tags/attention" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>attention</span></a></p>
Jane Adams<p>Omg omg omg there's a new <a href="https://vis.social/tags/3blue1brown" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>3blue1brown</span></a> video and it's about convolutions and it is beautiful ✨✨✨ <a href="https://youtu.be/KuXjwB4LzSA" rel="nofollow noopener noreferrer" target="_blank"><span class="invisible">https://</span><span class="">youtu.be/KuXjwB4LzSA</span><span class="invisible"></span></a></p><p><a href="https://vis.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MachineLearning</span></a> <a href="https://vis.social/tags/ExplainableAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ExplainableAI</span></a> <a href="https://vis.social/tags/Education" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Education</span></a> <a href="https://vis.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://vis.social/tags/XAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>XAI</span></a> <a href="https://vis.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://vis.social/tags/NeuralNetworks" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>NeuralNetworks</span></a> <a href="https://vis.social/tags/CNNs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CNNs</span></a> <a href="https://vis.social/tags/ComputerScience" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ComputerScience</span></a> <a href="https://vis.social/tags/learning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>learning</span></a></p>
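One classic way explainers frame a convolution, and a theme that video explores, is polynomial multiplication: convolving two coefficient lists multiplies the polynomials they represent.

```python
def convolve(a, b):
    """Discrete convolution of two coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# (1 + 2x) * (3 + 4x) = 3 + 10x + 8x^2
print(convolve([1, 2], [3, 4]))  # [3, 10, 8]
```

The same sliding-and-summing pattern, read over 2D pixel grids instead of coefficient lists, is exactly what a CNN layer computes.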