It’s been more than a month since the news about GPT-2, a model that generates human-like text, hit the media. If you haven’t paid much attention to this topic, here is a short summary with useful reference materials.
The third edition of fast.ai’s course, Deep Learning for Coders (part 1), is now publicly available, and I can assure you it is a must-watch for anyone interested in AI.
To answer this question, let's take an example. We all learned to recognize colours by being introduced to objects in those colours first. Strawberry, tomato, pepper and fire truck were probably enough to understand the concept of redness. Once we grasp what the red colour is, we can correctly identify it on flowers, cars, abstract paintings and many real-world objects that we’ve never come across before. This is transfer learning.
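In code, the same idea is usually applied by reusing a network body trained on a large dataset and training only a fresh head for the new task. The following is a minimal PyTorch sketch with a toy stand-in for the pretrained body (the layer sizes and two-class task are illustrative assumptions, not a real model):

```python
import torch.nn as nn

# A toy "pretrained" body: imagine these layers were already trained
# on a large dataset (the sizes here are illustrative only).
body = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
)

# Transfer learning: freeze the pretrained body so its weights
# are not updated during training on the new task...
for p in body.parameters():
    p.requires_grad = False

# ...and attach a new head for the new task (say, 2 classes:
# "red" vs "not red", echoing the colour example above).
new_head = nn.Linear(64, 2)
model = nn.Sequential(body, new_head)

# Only the new head's parameters remain trainable.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)  # 64*2 weights + 2 biases = 130
```

Because only the small head trains, the model can learn the new task from far fewer examples than training from scratch would need.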
We had the pleasure of winning first prize in the PolEval 2018 language modeling task. This success largely resulted from the adaptations we made to the ULMFiT architecture by Jeremy Howard and Sebastian Ruder. Below you can find a short presentation pointing out the recent changes to language modeling, especially the crucial improvements to the Polish language model and n-waves’ contribution to them :)
Is it possible for a computer to understand text or speech as humans do? And then translate it into another language?