aiGrunn Podcast
Episode 5 - Interview with Professor Lambert Schomaker

Prof. Lambert Schomaker’s LinkedIn and website:

https://www.linkedin.com/in/lambertschomaker/

https://www.ai.rug.nl/~lambert/

Music: Under K for King – Legendary

aiGrunn is on November 15th, 2024 in Forum Groningen. Get your tickets now at: https://aigrunn.org/ 

PyGrunn, aiGrunn’s sister conference, is on May 17th, 2024, also in Forum Groningen. Get your tickets at https://pygrunn.org

AI Generated Summary:

– The podcast opens with a hypothetical choice between two datasets, the Dead Sea Scrolls and the Kabinet van de Koningin (Cabinet of the Queen), leading to a discussion of why a sufficiently large dataset is essential for making meaningful advances in deep learning and AI.

– Professor Lambert Schomaker emphasizes that both historical datasets are too small for serious deep learning applications, highlighting a key lesson from the big data revolution: even simplistic AI methods improve drastically with larger datasets.

– Schomaker recounts his experiences with the Cabinet of the Queen dataset, an administrative collection from 1903 that was used in early experiments applying AI to scans of handwritten manuscripts. Its uniform handwriting and historical significance made it an ideal starting point for their AI research.

– The discussion moves to the challenges faced when using deep learning for tasks with limited data, such as dating manuscripts from the Dead Sea Scrolls. The podcast stresses the importance of traditional statistical methods and image processing techniques when large datasets are not available, a situation often encountered in real-world applications.

– The conversation also touches on the discrepancy between academic research and industry expectations for AI applications, noting that businesses often lack sufficient high-quality data to harness deep learning effectively. This leads to a broader discussion of the need for adaptable AI systems that can continuously learn and evolve with new or changing datasets.

– A highlight of the episode is the discussion of how AI systems might understand and process multimodal data – incorporating not just text but also visual, auditory, and sensorimotor information – reflecting a more human-like experience of the world and potentially leading to systems that better understand our physical environment.