ML in PL 2022 – what did we learn during the conference?

The recent ML in PL 2022 conference (organised by the ML in PL Association) was a great occasion to take stock of the machine learning landscape in Poland and beyond. Here is what we took away from it.

  1. ML in PL 2022 conference
  2. ML in PL conference – agenda
  3. Can you transfer the best software engineering practices to machine learning code?
  4. Does human-AI synergy exist?
  5. Do machines see like humans?
  6. What can you infer about society by analysing ML models bias?
  7. Is ML in PL conference worth attending?

ML in PL 2022 conference

Since 2017, the ML in PL conference has been conducted yearly. It was first organized at the University of Warsaw’s Faculty of Mathematics, Informatics, and Mechanics, but it was moved to a virtual platform during the pandemic.

The primary goals of the conference (and the ML in PL Association in general) are as follows:

Create a robust local community of machine learning researchers, practitioners, and enthusiasts at all stages of their careers.
Encourage early research activities and support future generations of students with ML interests.
Encourage the exchange of information in ML.
Encourage commercial involvement in science.
Encourage worldwide collaboration in machine learning.
Improve public comprehension of machine learning.

The conference this year spanned three days and was jam-packed with information, networking, and entertainment.

Agenda for the ML in PL conference

It all started with a students’ day, when attendees could listen to eight student presentations or take part in NVIDIA’s workshops on the mechanics of deep learning.

The main part of the conference included:

9 keynote speeches,
3 discussion panels,
9 contributed talks,
4 sponsors’ presentations,
34 posters presented during the poster session.

Among the many topics covered were learning with positive and unlabeled data, computer vision, probabilistic & auto ML, deep learning, reinforcement learning, NLP, science-related ML, probabilistic neural networks, and consolidated learning.

The conference included so many themes, presentations, and meetings that it was impossible to cover them all. That is why I chose the four I found the most motivating and fascinating. Here they are!

Can you apply best practices in software engineering to machine learning code?

The quick answer is that you can and should. You can even get some great help with it: Kedro, a Python framework for creating modular, maintainable machine learning code, presented by Dominika Kampa and her colleagues from QuantumBlack, AI by McKinsey.

Pipeline visualisation is one of its most powerful features. ML code can get highly complicated, and keeping it up to date, as well as presenting it to the business, is sometimes too much. It is much easier to understand the complete solution, as well as its pieces, if the code can be represented as a flow with explicit inputs, outputs, parameters, dependencies, and layers. I recommend watching a demo to see how the visualisation works in practice.

The project template is one of the framework’s technological foundations. This is how you begin a project: by establishing the directory structure. After that, you add data, develop a pipeline out of functions, and finally package the product by writing documentation and preparing it for release.
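
To give a flavour of what ‘developing a pipeline out of functions’ looks like, here is a minimal sketch of a Kedro pipeline definition. The node functions and dataset names are hypothetical placeholders, not the ones from the talk.

```python
# Minimal sketch of a Kedro pipeline: plain Python functions wired together as
# nodes with named inputs and outputs (hypothetical dataset and parameter names).
from kedro.pipeline import Pipeline, node


def clean_data(raw_data):
    # Hypothetical preprocessing step: drop incomplete rows.
    return raw_data.dropna()


def train_model(clean, model_options):
    # Hypothetical training step returning a fitted "model" object.
    return {"trained_on_rows": len(clean), "options": model_options}


def create_pipeline(**kwargs) -> Pipeline:
    return Pipeline(
        [
            node(clean_data, inputs="raw_data", outputs="clean_data", name="clean"),
            node(
                train_model,
                inputs=["clean_data", "params:model_options"],
                outputs="model",
                name="train",
            ),
        ]
    )
```

The pipeline visualisation described above is rendered from exactly this kind of definition, with datasets, parameters, and nodes shown as a graph.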

Experiment tracking is another intriguing feature. The results of all your experiments, together with descriptions of their environments, are saved in a single place where you can conveniently browse through them. All it takes is a few extra lines of code.
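
As a rough illustration (the exact dataset types and configuration depend on the Kedro and Kedro-Viz versions, so treat this as an assumption about a typical setup), tracking can be as simple as returning a dictionary of metrics from a node and registering that output as a tracking dataset in the data catalog.

```python
# Sketch of a metrics-producing node. Registering its output in the data catalog
# as a tracking dataset (e.g. tracking.MetricsDataSet) is what makes the numbers
# show up for every run -- this reflects a typical setup, not an exact recipe.
def evaluate_model(predictions, test_labels) -> dict:
    # Hypothetical evaluation logic; a plain dict of floats works as a metrics output.
    accuracy = float((predictions == test_labels).mean())
    return {"accuracy": accuracy}
```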

Is there a human-AI synergy?

Petar Veličković, Staff Research Scientist at DeepMind, Affiliated Lecturer at the University of Cambridge, and Associate of Clare Hall, Cambridge, delivered one of the most engaging and enthusiastic lectures. His primary research focus is geometric deep learning, in particular graph representation learning. The topic has lately gained popularity in both applications and research. Graphs allow for the modelling of complicated interactions and interdependencies between objects, and they have numerous applications ranging from social science to logistics to chemistry and many more. Combined with machine learning, they produce ground-breaking results, owing primarily to their high expressive capacity.
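
For readers new to the area, the core mechanism behind graph neural networks is message passing: each node updates its representation by aggregating the features of its neighbours. Below is a tiny, generic NumPy sketch of a single such layer; it only illustrates the mechanism, not any particular architecture from the talk.

```python
# Toy message-passing layer: every node averages its neighbours' (and its own)
# features, then applies a learned linear map followed by a ReLU.
import numpy as np


def message_passing_layer(adjacency: np.ndarray, features: np.ndarray,
                          weights: np.ndarray) -> np.ndarray:
    a_hat = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    degree = a_hat.sum(axis=1, keepdims=True)        # per-node degree
    aggregated = (a_hat / degree) @ features         # mean over the neighbourhood
    return np.maximum(aggregated @ weights, 0.0)     # linear map + ReLU


# Tiny example: a 3-node path graph with 2-dimensional node features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
w = np.random.default_rng(0).normal(size=(2, 4))
print(message_passing_layer(adj, x, w).shape)  # (3, 4)
```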

Petar mentioned the discovery of the antibiotic Halicin by MIT and the estimated-time-of-arrival optimisation in Google Maps by DeepMind as two of the most well-known applications of Graph Neural Networks (GNNs), both of which were developed with his help.

An intriguing question is whether GNNs can be used in abstract fields such as pure mathematics. Petar and a group of mathematicians examined it on a long-standing open problem from Representation Theory (40 years without substantial progress!). The researchers wanted to understand a relationship between two objects, one of which could be represented as a directed graph, making GNNs a perfect fit. The approach they adopted allowed them to analyse and interpret the results using attribution techniques, which help establish which features or structures are critical to a prediction. The group discovered two essential structures, which eventually led to a mathematical proof.
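
The attribution part can be pictured with a very simple gradient-based saliency check: compute the gradient of the model’s output with respect to the input features and see which nodes carry the largest gradients. The model and data below are random placeholders (the actual work used more elaborate techniques), but the principle is the same.

```python
# Toy gradient-based attribution: which input nodes most influence the prediction?
# The model and features here are random stand-ins, not the ones from the paper.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)
node_features = torch.randn(5, 8, requires_grad=True)   # 5 nodes, 8 features each

score = model(node_features).sum()                       # scalar output to explain
score.backward()                                         # gradients w.r.t. the inputs
node_importance = node_features.grad.abs().sum(dim=1)    # one saliency score per node
print(node_importance)
```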

Their study showed that AI can inspire and aid people, even in the most abstract domains, by augmenting and guiding the search through a domain. Rather than giving an explicit answer, empowering human intuition can have a far more profound impact in the end.

Do machines have the same perception as humans?

An artificial neural network is an example of a nature-inspired algorithm: it is modelled on biological neural networks and approximates the way neurons work in a biological brain. This approach has proved to be a very powerful tool for solving a variety of problems, ranging from text comprehension to speech recognition and image recognition. But does the human-like representation imply that machine cognition and human cognition attend to the same features of an object?

Matthias Bethge, head of the Tübingen AI Center and professor of computational neuroscience and machine learning, set out to examine different inductive priors in computer vision. He explored images that are easy for people to recognise but challenging for convolutional neural networks (CNNs), and vice versa, with an emphasis on the mismatch between human and machine decision boundaries.

Texture-based classification was investigated as one of the inductive priors. The researcher compared predictions on original images with predictions on texturised images (produced by texture synthesis from the originals). It turned out that this modification did not worsen the results: the method worked well as long as the texture stayed the same as in the original. He therefore decided to take things a step further and create a dataset pairing a texture from one class with a shape from another class (for example, the shape of a cat rendered with an elephant’s texture).

He then compared the fraction of items classified correctly by shape with the fraction classified correctly by texture. It turned out that humans focus almost entirely on shape, whereas CNNs rely more on texture information. Because CNNs depend so heavily on texture, they are also more sensitive to changes in it. As a result, machine performance can be boosted by feeding the model a training set augmented with randomised textures (also generated using neural networks).
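
A rough way to picture the experiment: take a cue-conflict image, run it through a pretrained CNN, and check whether the prediction follows the shape class or the texture class. The file name and class groupings below are hypothetical and the ImageNet indices only approximate; the original study used a purpose-built dataset and a far more careful protocol.

```python
# Sketch: does a pretrained CNN classify a cue-conflict image by shape or by texture?
# The image file is hypothetical and the class-index groupings are approximate.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

image = Image.open("cat_shape_elephant_texture.png").convert("RGB")  # hypothetical file
with torch.no_grad():
    predicted = model(preprocess(image).unsqueeze(0)).argmax(dim=1).item()

CAT_SHAPE_CLASSES = {281, 282, 283, 284, 285}   # approximate ImageNet cat classes
ELEPHANT_TEXTURE_CLASSES = {385, 386}           # approximate ImageNet elephant classes
if predicted in CAT_SHAPE_CLASSES:
    print("decision follows the shape")
elif predicted in ELEPHANT_TEXTURE_CLASSES:
    print("decision follows the texture")
else:
    print("decision follows neither the shape nor the texture class")
```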

By comparing machine cognition with human cognition, Matthias Bethge demonstrated that we can get closer to the desired answer. In his work, he has investigated several different techniques for making machine decision-making more human-like, and he consistently shows that linking neuroscience and psychology with machine learning can greatly benefit the latter.

What can you learn about society by studying the bias of ML models?

One poster in particular caught my attention during the poster session. Adam Zadrożny from the National Centre for Nuclear Research and the University of Warsaw, and Marianna Zadrożna from the Academy of Fine Arts, collaborated on it. The researchers looked at text-to-image models that had been trained on datasets of photos and captions scraped from the Internet. They examined the outputs of the DALL-E mini model, which, unlike DALL-E and DALL-E 2, is more prone to picking up bias from the original datasets.

Bias can be viewed as a drawback of a model, but it can also be used as a research tool for a much bigger problem: societal misconceptions. The researchers generated images from health-related prompts. They observed, for example, that the phrase ‘autistic child’ produced exclusively images of boys, as if girls did not have autism. They also searched for ‘person with depression,’ which yielded images of young adults. This prompted them to ask whether we, as a society, recognise that depression may also affect the elderly.

These are only two examples, but you can find more by experimenting with DALL-E mini on your own.

Is it worthwhile to attend the ML in PL conference?

Undoubtedly, yes! I’d recommend this event to anyone with an interest in machine learning. It offers a lot of inspiring lectures, a chance to learn about cutting-edge methods, and a great opportunity for community members to meet and exchange ideas. This event really broadens our horizons, which is what I appreciate about it the most.

A mathematics graduate with experience in data analytics, working as a machine learning engineer since 2020. She is responsible for analysing, designing, and putting AI solutions into practice. An MLOps enthusiast, Aleksandra tries to convince the business of its importance on every project she works on.

Middle Eastern cultural enthusiast who moonlights as a senior gardener.
