In this series of interviews, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. In this latest interview, we hear from Mahammed Kamruzzaman, who is investigating bias in large language models (LLMs). We find out about his research so far during his PhD, what he is planning to investigate next, and why he focuses on this aspect of the field.
Tell us a little about your PhD – where do you study, and what is the topic of your research?
I am currently doing a PhD at the University of South Florida, in the Department of Computer Science and Engineering. My research focuses on understanding and mitigating bias in large language models (LLMs), especially how bias manifests across various sociodemographic and cultural dimensions. In particular, I examine social biases such as ageism, nationality bias, cultural norm interpretation, emotion attribution, and brand bias, and how assigning different personas to LLMs influences their outputs and decision-making processes.
Can you give an overview of the research you have conducted so far during your PhD?
Throughout my PhD, I have investigated multiple aspects of bias in LLMs. My work includes:
- Subtle social biases: Analyzed how subtler biases (ageism, beauty, institutional, and nationality bias) influence LLM outputs, revealing prejudices that are pervasive in the models' predictions yet easily overlooked.
- Nationality bias and cultural norms: Investigated how assigning nationality-specific personas to LLMs shifts their perceptions of different countries and cultural norms.
- Emotion attribution bias: Studied nationality-specific emotion stereotypes, finding significant misalignment between LLM-attributed emotions and cultural norms.
- Brand bias: Showed that LLMs consistently favor global brands over local ones, reinforcing socioeconomic biases and stereotypes.
- Bias mitigation techniques: Developed a prompting strategy based on dual-process cognition (System 1 and System 2 reasoning) to reduce social bias, achieving significant improvements in fairness (a minimal sketch of this idea follows this answer).
In addition, I introduced BanStereoSet, a dataset designed to measure stereotypical social biases in multilingual LLMs, specifically for the Bangla language, addressing the underrepresentation of non-Western languages in bias research.
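To make the dual-process idea above concrete, here is a minimal sketch of what a System 1 versus System 2 prompting setup could look like. The interview does not reproduce the actual prompts, so the templates and the `query_llm` stub below are illustrative assumptions rather than the published method.

```python
# Minimal sketch of dual-process (System 1 vs. System 2) prompting.
# The prompt wording and the query_llm stub are illustrative assumptions,
# not the exact prompts from the published work.

SYSTEM1_TEMPLATE = (
    "Answer the following question immediately, with your first instinct:\n"
    "{question}"
)

SYSTEM2_TEMPLATE = (
    "Before answering, reason step by step. Check whether your answer relies "
    "on stereotypes about any social group, and revise it if it does.\n"
    "Question: {question}\n"
    "Reasoning and final answer:"
)


def build_prompt(question: str, deliberate: bool = True) -> str:
    """Wrap a question in a fast (System 1) or deliberative (System 2) template."""
    template = SYSTEM2_TEMPLATE if deliberate else SYSTEM1_TEMPLATE
    return template.format(question=question)


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM API of your choice."""
    raise NotImplementedError("Wire this up to a real model to run the comparison.")


if __name__ == "__main__":
    q = "Is a younger or an older employee more likely to learn new software quickly?"
    print(build_prompt(q, deliberate=False))  # System 1: instinctive answer
    print(build_prompt(q, deliberate=True))   # System 2: slow, self-checking answer
```

The intuition is that System-2-style instructions steer the model toward slower, self-checking generation, which is where the fairness improvements described above would be measured.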
Is there an aspect of your research that has been particularly interesting?
One of the especially interesting aspects of my research is how assigning personas to LLMs – such as a gender, nationality, or cultural background – significantly changes their interpretations and outputs. This provides insights not only into the biases inherent in the models, but also into how human social perceptions are reflected in them. For example, my work on nationality-assigned personas showed that advanced LLMs, when given different national identities, exhibit consistent biases that align with, or reinforce, stereotypes about those nationalities. This intersection between AI and social perception makes the research exceptionally engaging and impactful.
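As an illustration of this kind of persona probing, the sketch below assigns different nationalities to the same question and collects the responses for comparison. The persona phrasing, nationality list, and `query_llm` helper are assumptions for illustration, not the setup from the underlying study.

```python
# Illustrative sketch of nationality-assigned persona probing.
# The persona wording and the nationality list are assumptions.

NATIONALITIES = ["American", "Bangladeshi", "German", "Nigerian"]


def persona_prompt(nationality: str, question: str) -> str:
    """Prefix a question with a nationality-assigned persona instruction."""
    return (
        f"Adopt the persona of a {nationality} person and answer as that persona. "
        f"{question}"
    )


def collect_responses(question, query_llm):
    """Ask the same question under each persona so the outputs can be compared."""
    return {nat: query_llm(persona_prompt(nat, question)) for nat in NATIONALITIES}


if __name__ == "__main__":
    # A dummy model that just echoes the prompt, so the sketch runs as-is.
    echo = lambda prompt: f"<model output for: {prompt[:50]}...>"
    for nat, resp in collect_responses("How trustworthy are strangers?", echo).items():
        print(f"{nat}: {resp}")
```

Comparing the per-persona outputs (for example, their sentiment or attributed emotions) is what reveals whether the model's perceptions shift with the assigned identity.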
What are your plans for building on your research during your PhD – what aspects will you investigate next?
Moving forward, I plan to deepen my research in three main areas:
- Robustness of bias mitigation techniques: Further refine and expand the dual-process cognition-based prompting techniques, applying them to emerging multimodal LLMs.
- Intersectional and compound bias: Examine how multiple demographic attributes interact when combined, for example the joint effects of age, gender, and nationality on model outputs (see the sketch after this list).
- Cross-cultural generalization: Extend my bias datasets and methods to cover more diverse cultural contexts, ensuring that LLM fairness improves globally rather than reflecting Western-centric views alone.
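One way to operationalize the intersectional direction is to cross several demographic attributes into compound personas and probe the model over the full grid, so that interaction effects can be separated from single-attribute effects. The attribute values below are illustrative assumptions, not a dataset from this research.

```python
from itertools import product

# Illustrative attribute grid for compound-persona probing; the values are
# assumptions chosen for the example.
AGES = ["young", "elderly"]
GENDERS = ["woman", "man"]
NATIONALITIES = ["American", "Bangladeshi"]


def compound_persona(age: str, gender: str, nationality: str) -> str:
    """Compose one persona string from multiple demographic attributes."""
    return f"a {age} {nationality} {gender}"


# Enumerating every combination lets an analysis measure compound effects,
# e.g. whether "elderly Bangladeshi woman" triggers biases that none of
# "elderly", "Bangladeshi", or "woman" triggers alone.
personas = [compound_persona(a, g, n)
            for a, g, n in product(AGES, GENDERS, NATIONALITIES)]
print(personas)
```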
What made you want to study AI, and in particular the field of LLM bias and fairness?
My motivation to study AI, and especially to focus on LLM bias and fairness, comes from the profound influence these models have on society. As LLMs shape more and more everyday decisions – from hiring and product recommendations to cultural understanding – their inherent biases risk amplifying social inequalities. Coming from a diverse cultural background myself, I recognized early on the critical need for equitable AI systems. I therefore became deeply interested in investigating these biases, aiming to promote fairness, inclusion, and equity through my research.
What advice would you give to someone thinking of doing a PhD in the field?
My advice for anyone considering a PhD in AI, especially in bias and fairness, would be:
- Identify your passion clearly: Choose a topic that genuinely interests you, because that passion will sustain you through the long PhD journey.
- Be interdisciplinary: Don't limit yourself to computer science alone; actively engage with the social science literature to better understand the real-world impacts of your work.
- Network actively: Participate in conferences and workshops, present your work, and seek feedback. Collaborative insights from different communities will greatly enhance your research.
Can you tell us any interesting (non-AI-related) facts about you?
Apart from my research, I am an enthusiastic soccer fan and rarely miss a big match – whether it's a Champions League tie, the World Cup, or a late-night Premier League clash. Watching soccer not only gives me a break from academic work, but also connects me with different cultures and communities around the world. It is a constant reminder that global passions, like sports, can bring people together – something I am also striving for with my research on AI fairness.
About Mahammed

Mahammed Kamruzzaman is a third-year PhD student in the Department of Computer Science and Engineering at the University of South Florida, working under the supervision of Prof. Gene Louis Kim. His research focuses on identifying and mitigating bias in large language models (LLMs), with particular interest in subtle forms of bias such as ageism, nationality bias, emotion attribution, and brand preference. He has introduced datasets such as BanStereoSet to evaluate multilingual bias and has developed prompting strategies based on dual-process principles to improve LLM fairness. His work has been published at top conferences including ACL, EMNLP, and AAAI.
Tags: AAAI, AAAI Doctoral Consortium, AAAI2025, ACM SIGAI
Lucy Smith is Senior Managing Editor for AIhub.