The Mastermind Behind GPT-4 and the Future of AI | Ilya Sutskever | Eye on AI #118
Introduction to the Hebrew Language: The Rule Systems of the Language
Grammarly AI-NLP Club #8 - Arabic Natural Language Processing: Challenges and Solutions
This AI says it's conscious and experts are starting to agree. w Elon Musk.
The Origins of Hebrew
Steven Pinker: Linguistics as a Window to Understanding the Brain | Big Think
GPT-3 vs Human Brain
GPT-3: Language Models are Few-Shot Learners (Paper Explained)
Arabic Influence on Modern Hebrew!!
The ARABIC Language (Its Amazing History and Features)
Natural Language Processing
Consolidating and Exploring Open Textual Knowledge – Prof. Ido Dagan, Bar-Ilan University
Introduction to Language – Computational Processing of Human Language, with Prof. Ido Dagan (on Spotify)
Start with NLP
Recommended textbook, available online:
It also provides great little introductions to many fields of linguistics before you hop into the computational part.
NLP Tutorials Part -I from Basics to Advance
Hebrew NLP Resources
Databases and Possible Collaborations
Legal Opinion: Uses of Copyright-Protected Content for Machine Learning
spaCy · Industrial-strength Natural Language Processing in Python
Stanza – A Python NLP Package for Many Human Languages
Created by the Stanford NLP Group
Large language model (LLM)
Open LLMs List
What’s before GPT-4? A deep dive into ChatGPT
GPT-4 Training process
Like previous GPT models, the GPT-4 base model was trained to predict the next word in a document, using publicly available data (such as internet data) as well as licensed data. The training data is a web-scale corpus that includes correct and incorrect solutions to math problems, weak and strong reasoning, self-contradictory and consistent statements, and a great variety of ideologies and ideas.
So when prompted with a question, the base model can respond in a wide variety of ways that might be far from a user’s intent. To align it with the user’s intent within guardrails, we fine-tune the model’s behavior using reinforcement learning with human feedback (RLHF).
Note that the model’s capabilities seem to come primarily from the pre-training process—RLHF does not improve exam performance (without active effort, it actually degrades it). Steering of the model, however, comes from the post-training process—without it, the base model requires prompt engineering even to know that it should answer the questions.
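The next-word-prediction objective described above can be illustrated with a deliberately tiny, count-based bigram model. This is a hypothetical sketch for intuition only, not OpenAI's training code: the corpus, function names, and model are all made up, and a real LLM uses a neural network over tokens rather than word counts.

```python
# Toy sketch of the next-word-prediction objective: a count-based bigram
# "language model" trained on a tiny corpus. Everything here is illustrative.
from collections import Counter, defaultdict
import math

corpus = "the model predicts the next word the model learns".split()

# "Training": count how often each word follows each context word.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_probs(prev):
    """Maximum-likelihood estimate of P(next word | previous word)."""
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

probs = next_word_probs("the")
best = max(probs, key=probs.get)   # the most likely next word after "the"
loss = -math.log(probs["model"])   # cross-entropy loss for the true next word

print(best, round(probs[best], 3), round(loss, 3))
```

Pre-training an LLM amounts to minimizing exactly this kind of cross-entropy loss over a web-scale corpus, with a neural network in place of the count table; RLHF then adjusts which of the learned continuations the model prefers to produce.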
How Language-Neutral is Multilingual BERT?
AraBERT: Transformer-based Model for Arabic Language Understanding