Interview with Philippos Gaudis: Object State Classification

Philippos’ PhD thesis focuses on developing a method for recognising object states without visual training data. His approach learns representations for accurate state classification by leveraging meaningful knowledge, encoded as knowledge graphs, drawn from online sources and large language models.

In this interview series, we are meeting some of the AAAI/SIGAI Doctoral Consortium participants to learn more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives with a panel of established researchers, in a workshop held in conjunction with the conference. In this latest interview, we met with Philippos Gaudis, who recently completed his PhD, and found out more about his research on object state classification.

Can you start by telling us where you are working and briefly introducing the main theme of your research?

I am Philippos, and currently I am a postdoctoral researcher at the Foundation for Research and Technology – Hellas (FORTH) in Crete, Greece. Technically, I am also still a PhD student at the University of Crete. I say ‘technically’ because I am currently waiting for my graduation ceremony. I did my undergraduate degree here, and then two masters – one in computer science and the other in cognitive science. I have now finished my PhD, which I defended in January.

The topic of my thesis concerns a problem in computer vision, namely object state classification. Object state classification is easy to understand: you have an object in an image or a video, and you want to identify the state of the object, which refers to its functional status. For example, this water bottle I have just closed is in the ‘closed’ state (more accurately, ‘closed’ represents one of its functional states). One of the most important problems in the field of computer vision is object recognition, or classification: identifying the class of an object. Object state classification goes a step further. Beyond being able to identify the object, it is often important to identify its state, because the actions you can perform depend on that state. For example, if a robot wants to grab a bottle of water, it matters whether the cap is open or closed, or whether the bottle is full or empty. So this is not only a theoretically important problem, but also one with many practical implications.

What was the state of the art in object state classification when you started your PhD, and how has your work progressed?

When I started my PhD, many researchers were starting to focus on this problem. In the last decade, a large amount of research effort had been invested in the closely related problem of object classification, and we can now say that we have many powerful object classification systems, some of which surpass human performance. The next logical step, therefore, is to move on and study object state classification. This problem is more challenging because it is usually harder to solve with standard deep learning algorithms. It is more difficult for a system to learn to identify the state of an object, and to separate one state from another, than to recognise the object’s class. For example, if you have an image of a bottle of water with the cap on, and another image of the same bottle with the cap off, the two images look very similar. The visual difference between them is tiny, but this small difference corresponds to completely different states. So this is a big challenge here.

Another big challenge is that in AI we usually need a lot of data to train our models, and there was only one dataset for object states when I started my PhD. In fact, as part of my thesis, we created a new dataset for object state classification. One year after our dataset was published, two other datasets were published. So we now have about four benchmark datasets.

Overall, I would say that the problem has not yet been solved, but there has been a lot of progress in the last few years, and more and more research is concentrating on this topic. I would like to think that I have also contributed my part to advancing this problem.

Can you talk a little about the method you have developed?

In the last few years, this problem has been treated in the standard way that most computer vision problems are treated, using a pure deep learning approach. We wanted to do something slightly different with our method, and chose to use a neurosymbolic approach. To explain what neurosymbolic means: a pure deep learning method, like a standard convolutional neural network, is completely data-driven. The neural network learns statistical patterns from data, and while these patterns often help solve the task, they are not always easily interpretable. For example, when we recognise a face, we focus on features such as the eyes or the mouth. A deep learning system may also use the same cues, but the way it represents and uses this information is generally opaque: it does not link directly to human-understandable concepts. This lack of interpretability can be a problem, especially in safety-critical or high-stakes applications. In contrast, symbolic approaches (what some call “classical AI”) rely on explicit, structured representations, such as rules and logical relationships, which are often more interpretable and draw on human knowledge. These two paradigms are complementary: symbolic systems provide interpretability and prior knowledge, while deep learning provides flexibility and adaptability.

The prevailing approach to solving computer vision problems in the last decade has been pure deep learning. Our method combines the best of both worlds: the flexibility of neural networks with the structured logic of symbolic knowledge, represented by a knowledge graph. Graphs are mathematical structures that enable us to represent relationships between different concepts in a structured and scalable way. A graph essentially consists of two types of entity: nodes and edges. A metro map is a graph, for example: the stations are the nodes and the railway lines connecting them are the edges. In our case we used a special kind of graph, a knowledge graph, which allows the representation of various types of knowledge, such as commonsense knowledge, that is, knowledge related to everyday life. The knowledge graph that we used was built from objects, states and other concepts, together with their relationships. To build it, we relied on online repositories of knowledge.
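To make the graph structure concrete, here is a minimal Python sketch (not the code from the thesis) of how object–state knowledge might be stored as subject–relation–object triples; the relation names and the entries themselves are illustrative assumptions.

```python
# Minimal sketch of a knowledge graph as (subject, relation, object) triples.
# The relations and entries below are illustrative assumptions, not the
# actual graph used in the thesis.
triples = [
    ("bottle", "has_state", "open"),
    ("bottle", "has_state", "closed"),
    ("bottle", "has_part", "cap"),
    ("door", "has_state", "open"),
    ("door", "has_state", "closed"),
]

def neighbours(node):
    """Return every (relation, other-node) pair one hop away from `node`."""
    return [(r, o) for s, r, o in triples if s == node] + \
           [(r, s) for s, r, o in triples if o == node]

print(neighbours("bottle"))
# [('has_state', 'open'), ('has_state', 'closed'), ('has_part', 'cap')]
```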

Our latest neurosymbolic method (which was presented at WACV) combines a deep learning approach with the knowledge graph. Importantly, it introduces a novel variant of the object state classification task: the zero-shot setting. It is zero-shot in the sense that you can classify the state of an object without needing any visual data to train the model. It sounds magical, but it is not. Consider reading a book, for example: if you read about some animal that you have never seen before, your brain has the ability to identify that animal in an image, even if you are seeing it for the first time, just from having read a description of it. Broadly speaking, this is what we do with our zero-shot method. We have to find information somewhere other than images, so we use the knowledge graph to take advantage of this kind of additional information.
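As a rough illustration of the zero-shot idea (a sketch under assumptions, not the actual WACV model): the image is embedded by a visual backbone, each state is embedded from non-visual knowledge, for instance by an encoder over the knowledge graph, and classification picks the most similar state embedding. The vectors below are random stand-ins for what trained models would produce.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Stand-ins for the embeddings a knowledge-graph encoder would produce for
# each state node; in the real method these come from a trained model.
state_embeddings = {s: rng.normal(size=dim)
                    for s in ["open", "closed", "full", "empty"]}

def classify_state(image_embedding):
    """Return the state whose knowledge-derived embedding is most similar
    (by cosine similarity) to the image embedding. No object label is used,
    which is what makes the classifier object-agnostic."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(state_embeddings,
               key=lambda s: cosine(image_embedding, state_embeddings[s]))

# In a real system this vector would come from a visual backbone (e.g. a CNN);
# here it is random, so the predicted state is arbitrary.
print(classify_state(rng.normal(size=dim)))
```

Because no visual examples of the states are needed at training time, only the knowledge source, the same pipeline can in principle score states for objects it has never seen.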

Our method is also object-agnostic. That means that we can classify the state without using any cue or any other explicit information about the class of the object. For example, we may encounter objects that are unknown to the system, but we can still classify the state of these unknown objects.

So, to summarise, this method is both zero-shot and object-agnostic. This is novel, and promising in that it is better suited to real-world problems. In real life, you can encounter novel and unknown objects, because there are many thousands of objects that we come across in everyday life. And also because, even with a well-known object, there is always the risk of not identifying the object’s class correctly. If we rely on an exact determination of the object’s class, that error can cascade into the classification of the state. With an object-agnostic state classifier, we avoid this risk.

How was the Doctoral Consortium experience at AAAI 2025?

It was interesting. The two keynotes were really good, especially the second one, because it was very close to my subject. It covered robotics topics that were closely related to object states, so it was really interesting to me.

There were two different sessions where we were divided into groups of eight to ten PhD students and we sat with a mentor. We were able to ask the mentor questions and meet the other participants as well. The most interesting thing for me was that I met many interesting PhD students, and I would say that maybe some of us will collaborate in the near future.

What was it that inspired you to study AI, and this topic in particular?

When I started my undergraduate degree, AI was not the buzzword that it is now. Today everyone uses the word AI, but back then deep learning was only just starting to emerge. I remember we had an undergraduate course on AI, but very few people took it. Around that time I read a book on AI from the 1980s, The Mind’s I, by Douglas Hofstadter (a computer scientist) and Daniel Dennett (a philosopher). It is a book about the philosophical implications of AI. After I read this book, I realised that this was it for me. I was very fortunate, because AI was becoming more and more prominent in the years when I was looking for a masters and a PhD in AI.

For our final question, can you tell us an interesting fact about yourself, or about any hobbies you have outside of your research?

I really like reading. I read books about everything – literature, philosophy, poetry. In fact, I love anything written in a book. I am a fan of watching films too. I also like cycling and running. These activities have helped me with my research. For example, if I need to solve a problem, the best way to solve it is a 15km run or a 50km bike ride; afterwards, I can think more clearly and the ideas come.

To conclude, I want to add one more thought. We live in truly strange times. AI has changed our lives in remarkable ways (and I fear that the overall balance is not positive). It is becoming more and more clear that anyone who works with AI, whether in academia or industry, also bears a responsibility because of AI’s impact on society. I want to be optimistic and think that AI can be used to make the world a better place.

About Philippos

Philippos Gaudis is a postdoctoral researcher at the Foundation for Research and Technology – Hellas (FORTH) in Greece. He has a BSc and an MSc in computer science, as well as an MSc in cognitive science. His PhD focused on zero-shot object state classification. His research interests are in knowledge representation, neurosymbolic integration and zero-shot learning.

Tags: AAAI, AAAI Doctoral Consortium, AAAI 2025, ACM SIGAI


Lucy Smith is a senior managing editor for AIhub.
