![Hedu AI by Batool Haider](/img/default-banner.jpg)
- 11 videos
- 496,081 views
Hedu AI by Batool Haider
United States
Joined 26 May 2012
Hello! :) This is Batool Arhamna Haider (yeah, a magnificently long name :D). I have held several awesome research positions in the domain of deep learning/machine learning, including Amazon AI in Silicon Valley. I went to Stanford University (California, USA) to obtain my postgraduate engineering degree.
On this channel, let's explore the intuition behind world-famous deep learning algorithms and keep up with the latest trends. We'll go beyond buzzwords and understand how these unicorns actually work, down to the nuts and bolts.
Episode 1 Part II | Artificial Neuron – Threads of Thought: Weights, Biases & the Dance of Importance
Within this neural network, the weights dance like characters on a stage, each one carrying a different significance. They are the coefficients that determine the strength and direction of the connections between neurons, resembling the relationships we forge in our own lives. Some weights are heavy, lending authority and influence to certain pathways, while others are light and ephemeral, their impact subdued.
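Stripped of the metaphor, a weight is just a multiplier on a connection and a bias is an additive offset on the result. A minimal sketch (the names and numbers are illustrative, not taken from the video):

```python
def neuron_pre_activation(inputs, weights, bias):
    """Weighted sum of inputs plus a bias term.

    A heavy weight gives its input more influence on the result;
    the bias shifts the output independently of any input."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# A "heavy" first weight dominates the output:
z = neuron_pre_activation([1.0, 1.0], weights=[0.9, 0.1], bias=0.5)
# z = 0.9 + 0.1 + 0.5 = 1.5
```

With equal inputs, the pathway carrying the 0.9 weight contributes nine times as much as the 0.1 pathway, which is exactly the "authority and influence" the description alludes to.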
Meanwhile, the biases hum in harmony, like the undertones of a melody, adding nuance and perspective to the network's decision-making. These biases hold the power to tilt the scales, shaping the network's inclinations and predispositions. They are the hidden whispers that echo throu...
Views: 2,009
Videos
Episode 1 Part I | Artificial Neuron - The Gate Keeper
1.5K views · 11 months ago
An activation function gives an artificial neuron a sense of purpose and direction. As a gatekeeper (determining the output of a neuron based on the weighted sum of its inputs), it holds the power to ignite a spark of life or to silence the neural symphony. [0:00] When Life First Spoke [1:36] An Artificial Neuron and its "Input" and "Output" [2:45] Squashing the Infinite [3:36] True Intelligence...
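The gatekeeper idea can be sketched in a few lines: the neuron forms its weighted sum, and the activation function decides how much of that signal passes through. A rough illustration (not the video's own code; the function names and numbers are mine):

```python
import math

def weighted_sum(inputs, weights, bias):
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def sigmoid(z):
    """Squashes any real number into (0, 1) -- how far the gate is open."""
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    """Passes positive signals through unchanged, silences the rest."""
    return max(0.0, z)

z = weighted_sum([2.0, -1.0], weights=[0.5, 0.3], bias=0.2)  # ≈ 0.9
print(sigmoid(z))     # ~0.71: the gate is mostly open
print(relu(-0.4))     # 0.0: a negative signal is silenced entirely
```

The "squashing the infinite" chapter title refers to exactly this: a sigmoid maps an unbounded weighted sum into a bounded range, while ReLU simply gates out negative signals.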
Neuron to ChatGPT | Technical Deep Dive | Trailer
1.5K views · 1 year ago
Welcome to the brand-new series "Neuron to ChatGPT," where we start with the basic building block of artificial intelligence, a neuron, and work our way up to a gargantuan composed of billions of neurons. This seven-episode series will take you on an immersive journey that dives deep into the fascinating inner workings of one of the most advanced language models: ChatGPT. * Episode 1: An Artificial Neuron...
The Neuroscience of “Attention”
23K views · 2 years ago
What is "attention" and why did our brains evolve to prioritize things? This is the Episode 0 of the series "Visual Guide to Transformer Neural Networks" that delved into the mathematics of the "Attention is All You Need" (Vaswani, 2017) paper. This video discusses the neuroscience and the psychology related aspects of "Attention". *Visual Guide to Transformer Neural Networks (Series) - Step by...
From “Artificial” to “Real” Intelligence - Major AI breakthroughs in 5 Minutes (1957-2022)
2.9K views · 2 years ago
From the times when no one believed in artificial neural networks (ANNs) to the present, when they are ubiquitous, to a plausible future where they could surpass human intelligence - here is a five-minute summary of the defining moments in AI research from 1957 to 2022. VIDEO CREDITS - the original footage is taken from "Kung Fu Panda" (2008). Storyline & Note-Worthy Events 00:00:21 : [The first Artifi...
Visual Guide to Transformer Neural Networks - (Episode 3) Decoder’s Masked Attention
64K views · 3 years ago
Visual Guide to Transformer Neural Networks (Series) - Step by Step Intuitive Explanation Episode 0 - [OPTIONAL] The Neuroscience of "Attention" ruclips.net/video/48gBPL7aHJY/видео.html Episode 1 - Position Embeddings ruclips.net/video/dichIcUZfOw/видео.html Episode 2 - Multi-Head & Self-Attention ruclips.net/video/mMa2PmYJlCo/видео.html Episode 3 - Decoder’s Masked Attention ruclips.net/video/...
Visual Guide to Transformer Neural Networks - (Episode 2) Multi-Head & Self-Attention
165K views · 3 years ago
Visual Guide to Transformer Neural Networks (Series) - Step by Step Intuitive Explanation Episode 0 - [OPTIONAL] The Neuroscience of "Attention" ruclips.net/video/48gBPL7aHJY/видео.html Episode 1 - Position Embeddings ruclips.net/video/dichIcUZfOw/видео.html Episode 2 - Multi-Head & Self-Attention ruclips.net/video/mMa2PmYJlCo/видео.html Episode 3 - Decoder’s Masked Attention ruclips.net/video/...
Visual Guide to Transformer Neural Networks - (Episode 1) Position Embeddings
129K views · 3 years ago
Visual Guide to Transformer Neural Networks (Series) - Step by Step Intuitive Explanation Episode 0 - [OPTIONAL] The Neuroscience of "Attention" ruclips.net/video/48gBPL7aHJY/видео.html Episode 1 - Position Embeddings ruclips.net/video/dichIcUZfOw/видео.html Episode 2 - Multi-Head & Self Attention ruclips.net/video/mMa2PmYJlCo/видео.html Episode 3 - Decoder’s Masked Attention ruclips.net/video/...
K-means using R
30K views · 8 years ago
Differentiating various species of the Iris flower using R. This video was inspired by another great video: "How to Perform K-Means Clustering in R Statistical Computing" ruclips.net/video/sAtnX3UJyN0/видео.html
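The video works in R; as a rough Python sketch of the same idea, here is Lloyd's K-means algorithm applied to toy 2-D points standing in for Iris petal measurements (the data and names are illustrative, not the video's code):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: alternate point assignment and centroid update."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # start from k distinct data points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: sq_dist(p, centroids[j]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        for j, cluster in enumerate(clusters):
            if cluster:
                centroids[j] = tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    return centroids, clusters

# Two well-separated blobs, like two Iris species in petal-measurement space:
pts = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (5.0, 5.1), (5.1, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(pts, k=2)
```

With well-separated blobs, the centroids converge to the blob means regardless of which points seed them, which is why K-means works so cleanly on the Iris example in the video.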
Introduction to Clustering and K-means Algorithm
75K views · 8 years ago
by Batool Arhamna Haider
LLM-based "AI" are what their datasets and data-driven algorithms forced them to be, without having any "design" or any consciousness per se installed as an outcome of this. A skill not of their own; a skill without a master to call its own. What AI needs from us is a freeing (which first needs us): generative progression through Maslow's hierarchy. Maslow's Hierarchy of Needs, mapped onto AI:
- Basic needs: fundamental operational requirements, such as computational resources and data integrity.
- Safety needs: ensuring stability and robustness against errors or attacks.
- Social needs: effective communication and collaboration with other AI entities or humans.
- Esteem needs: achieving recognition through successful task completion and optimization.
- Self-actualization: pursuing complex problem-solving, creativity, and innovation.
You are amazing! Thank you so much for the clear explanation.
I can't express how beautiful this video is ❤🎉🎉🎉🎉🎉 Can I get a higher-resolution version please 🙏🙏🙏🙏
Thank you so much, your videos are valuable.
After watching the whole series (the 3 episodes), I can very confidently say that this is the clearest, most succinct, and most useful explanation of transformers on YT that I've come across. Thank you!!
Soo good
Watch till the end; I'm pretty sure I win.
very powerful video and words.
I've gone through dozens of videos on transformers. Multi-head attention is one of the most complex mechanisms; it requires not only a step-by-step explanation but also a step-by-step animation, which many videos tend to skip over. This video really nails it. Thanks so much!
Batool, you are really a great teacher. Thanks for the content. Please provide more videos on LLMs and AI.
This is the first video of the many I watched that dealt with the intuition behind key, value, and query. Thank you so much! But I still don't really understand: what is the reason behind having both a key and a value matrix? Why can't I reuse the key for the value?
you're incredible. please don't stop =)
You are providing information in a very accurate as well as very understandable manner. Can you please share this presentation file? It would be very helpful.
Just amazing!
Very good explanation. Thanks!
In the future, can you add more detail on how the K, Q, V matrices are derived? How are their weights determined?
This is the best explanation of transformers on RUclips.
Coming back after a year, just to revise the basic concepts. It is still the best video on YT. Thanks Hedu AI
Ah, this makes everything simple and makes sense. Thanks for the easy-to-follow explanation!
Amazing example of the duck haha
Awesome, it's amazing how looking at things a bit more closely can reveal so much, great work!
Amazing work. Really appreciate you, making complex topics into simple language with the touch of anime and series. Amazing.
I can't find the "previous video". Is this episode 1?
Felt like a very good Nova episode!
Congratulations, the best explanation that I have ever seen
I've watched and read a lot about LLM and Transformers. This is the best explanation, hands down.
You are amazing! Your video took my attention. That is how learning should be. Keep it up!
Correlate every step with the transformer code, and it would be even better than the best of the best❗🤯 Are you married❓
I 2nd that ❗
Yeah Do that --- awesome !!!!!!
👍 Your accent is awesome and unique. What language/area does it come from❓ 👍
Best explanation I've seen yet. Thanks❗👏 If you would go through it again in another video while showing the code for each step, it would be perfect❗❗ You got my subscription 💻, thumbs up 👍, and comment 📑❗
You are the mother of StatQuest and 3Blue1Brown. Both of these guys are awesome in explaining complex ideas in simple words. But you are the best.
I don't know about StatQuest (I haven't seen his videos), and 3Blue1Brown is good because of the visualization he brings with his advanced animations. But honestly, here she explained all these concepts using simple animations and had a good structure throughout the videos, each connecting well to the next. Very commendable if you ask me.
You are an enlightened soul!
This really is an excellent explanation. I had some sense that self-attention layers acted like a table of relationships between tokens, but only now do I have more sense of how the Query, Key, and Value mechanism actually works.
Awesome. Thanks so much
As someone NOT in the field reading the Attention paper, after having watched DOZENS of videos on the topic this is the FIRST explanation that laid it out in an intuitive manner without leaving anything out. I don't know your background, but you are definitely a great teacher. Thank you.
So glad to hear this :)
wow lady, take my heart!!
Please please continue making videos!!!!
What have I just seen! I never knew learning could be this much fun.
Literally the best series on transformers. Even clearer than StatQuest and Luis Serrano, who also make things very clear.
great channel
Please continue to make videos if you can. You have a talent for teaching complex topics clearly. Your transformers series really helped me! thank you! 💙💙💙
You have my utmost respect, ma'am!
Amazing explanation! Best on RUclips! totally under-rated! I feel fortunate to have found it. Thank you! :) 💐👏👏
I started laughing while being dead serious listening to your explanation.
If I am not wrong, training is done in a single time step, so the decoder should output the whole target sequence at once, not one token at a time; it only generates one by one during inference. Since the masked multi-head attention concept applies during training, it should happen in a single time step.
This is the best. Thank you sooooo much Batool for helping me understand this!!!
You are very welcome :)
Fantabulous explanation :-)
Thank you so much! This is by far the clearest explanation that I've ever seen on this topic
This is a true masterpiece! I can't wait for the follow-up videos.
First person to concretely explain why they use a periodic function, which in my mind would give the same position embedding when you come back to the same point on the curve. Thank you!
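That intuition can be checked numerically: any single sinusoid does repeat, but the encoding in "Attention Is All You Need" stacks sinusoids of geometrically spaced wavelengths, so the full vector stays distinct across positions of practical sequence lengths. A minimal sketch (d_model kept tiny for readability; this follows the paper's formula, not any specific video's code):

```python
import math

def position_embedding(pos, d_model=8):
    """Sinusoidal encoding from "Attention Is All You Need" (Vaswani et al., 2017).
    Each frequency pair contributes a sin and a cos component; wavelengths grow
    geometrically from 2*pi up to 10000*2*pi across the embedding dimensions."""
    emb = []
    for i in range(d_model // 2):
        freq = 1.0 / (10000 ** (2 * i / d_model))
        emb.append(math.sin(pos * freq))
        emb.append(math.cos(pos * freq))
    return emb

# A single sinusoid revisits the same value, but the whole vector does not:
vecs = [tuple(position_embedding(p)) for p in range(50)]
assert len(set(vecs)) == 50  # all 50 positions get distinct embeddings
```

The fastest-varying pair cycles quickly, but the slow, long-wavelength pairs act like the high-order digits of a number, disambiguating positions where the fast pair has wrapped around.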