Anonymous ID: 0f480f Jan. 3, 2023, 7:28 a.m. No.18065838

>>18065600

"CAREER: Language technologies against the language of social discrimination - this award is funded in whole or in part under the American Rescue Plan Act of 2021 (Public Law 117-2).

 

The exponential growth of online social platforms provides an unprecedented source of equal opportunities for accessing expert and crowd wisdom and for finding education, employment, and friendships. One key obstacle that can deeply impede these experiences is exposure to implicit social bias. The risk is high: biases are pernicious and pervasive, and it is well established that language is a primary means through which stereotypes and prejudice are communicated and perpetuated.

 

This project develops language technologies to detect and intervene in the language of social discrimination: sexist, racist, and homophobic microaggressions, condescension, objectification, dehumanizing metaphors, and the like, which can be unconscious and unintentional but cause prolonged personal and professional harm.

 

The program opens up new research opportunities with implications for natural language processing, machine learning, data science, and computational social science. It develops new web-scale algorithms to automatically detect implicit and disguised toxicity, as well as hate speech and abusive language online. Technologically, it develops new methods to surface and demote spurious patterns in deep-learning models, and new techniques to interpret deep-learning models, thereby opening new avenues to reliable and interpretable machine learning.

 

Successful completion of the program will lay the groundwork for a paradigm shift in how civility is monitored in cyberspace, shielding vulnerable populations from discrimination and aggression and reducing the mental load of platform moderators. The project can therefore benefit and empower a large number of individuals: members of disadvantaged groups, discriminated against by gender, race, age, sexual orientation, or ethnicity, who use social media or AI technologies built on user-generated content.

 

Finally, the educational curriculum developed by this program will equip future technologists with theoretical and practical tools for building ethical AI, and will substantially promote diversity, equity, and inclusion in STEM education, helping to foster a new, more diverse generation of researchers entering AI.

 

The overarching goal of this CAREER project is to develop lightly supervised, interpretable machine learning approaches, grounded in social psychology and causal reasoning, to detect implicit social bias in written discourse and narrative text. More specifically, the first phase of the project develops algorithms and models for identifying and explaining gendered microaggressions in short comments on social media, first in an unsupervised setting and then with active learning, given limited supervision by trained annotators. It provides transformative solutions for making existing overparameterized black-box neural networks more robust and more interpretable.

Since microaggressions are often implicit, it also develops approaches to generate explanations for the microaggression detector's decisions.

 

In the second phase, the project addresses the challenging task of detecting biased framing about members of the LGBTQ community in narrative domains of digital media and develops data-analytic tools by operationalizing, across languages, well-established social psychology theories. The expected outcomes of this five-year program include new datasets, algorithms, and models that provide people-centered text analytics and pinpoint and explain potentially biased framings across languages, data domains, and social contexts. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria."

 

https://www.usaspending.gov/award/ASST_NON_2142739_4900
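
The abstract mentions "new methods to surface and demote spurious patterns in deep-learning models" without saying what those methods are. Purely as an illustration of the general idea, here is a minimal Python/scikit-learn sketch (toy comments, toy labels, made-up identity-term list) of one common reweighting recipe: a shallow "bias-only" model is trained on surface features such as identity-term counts, and training examples it already classifies confidently are down-weighted so the main classifier has to rely on signals other than the spurious correlation.

# Illustrative sketch only, not the award's method: demoting a spurious
# correlation between identity terms and the "toxic" label via example reweighting.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "women are terrible at math",           # toxic, mentions an identity term
    "gay people should be banned",          # toxic, mentions an identity term
    "muslim drivers are a menace",          # toxic, mentions an identity term
    "you are a complete idiot",             # toxic, no identity term
    "women won three medals this week",     # benign, mentions an identity term
    "thanks for the helpful answer",        # benign
    "great photo of the sunset",            # benign
    "the tutorial was easy to follow",      # benign
]
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])            # 1 = toxic, 0 = benign (toy labels)
identity_terms = ["women", "gay", "muslim", "black"]   # illustrative list only

# 1) Bias-only model: it sees nothing but identity-term counts, so the only
#    pattern it can learn is the spurious shortcut "identity term => toxic".
bias_vec = CountVectorizer(vocabulary=identity_terms)
X_bias = bias_vec.fit_transform(texts)
bias_model = LogisticRegression().fit(X_bias, labels)
p_true = bias_model.predict_proba(X_bias)[np.arange(len(labels)), labels]

# 2) Demote: the more confidently the bias-only model gets an example right,
#    the less that example should matter when training the main model.
sample_weight = 1.0 - p_true

# 3) Main model trained on the full text, with the shortcut examples down-weighted.
main_model = LogisticRegression().fit(
    TfidfVectorizer().fit_transform(texts), labels, sample_weight=sample_weight
)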
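
For the "active learning, given limited supervision by trained annotators" part, the abstract again leaves the strategy open. Below is a minimal sketch of one standard approach, uncertainty sampling: start from a tiny labeled seed set, repeatedly ask an annotator about the comment the current model is least sure of, and retrain. All comments and labels are invented, and the gold-label array stands in for the human annotator.

# Uncertainty-sampling active-learning loop; illustrative only, not the project's pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

comments = [
    "you're surprisingly articulate for a girl",   # gendered microaggression
    "the meeting is moved to 3pm tomorrow",        # neutral
    "calm down, you're being so emotional",
    "she only got the job because of quotas",
    "great writeup, thanks for sharing",
    "can someone review my pull request",
    "let the men handle the technical details",
    "lunch is on me today",
]
gold = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # hypothetical annotator labels

X = TfidfVectorizer().fit_transform(comments)
labeled = {0, 1}   # tiny seed set already labeled by annotators
budget = 4         # how many additional annotations we can afford

for _ in range(budget):
    idx = sorted(labeled)
    model = LogisticRegression().fit(X[idx], gold[idx])
    pool = [i for i in range(len(comments)) if i not in labeled]
    if not pool:
        break
    probs = model.predict_proba(X[pool])[:, 1]
    # Uncertainty sampling: query the comment whose predicted probability of being
    # a microaggression is closest to 0.5, i.e. the one the model is least sure about.
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.add(query)   # the annotator's label for this comment is added to the pool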
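
On "explanations for the microaggression detector's decisions": the abstract does not specify what form these explanations take. A very naive baseline, sketched below with the same kind of toy data, is to report which tokens pushed a linear detector toward flagging a comment (learned weight times tf-idf value per token); the project itself presumably uses something far more sophisticated.

# Token-level attribution for a linear detector; a naive baseline, not the award's method.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train = [
    "you're surprisingly articulate for a girl",
    "calm down, you're being so emotional",
    "great writeup, thanks for sharing",
    "the meeting is moved to 3pm tomorrow",
]
y = np.array([1, 1, 0, 0])  # 1 = flagged as a gendered microaggression (toy labels)

vectorizer = TfidfVectorizer()
detector = LogisticRegression().fit(vectorizer.fit_transform(train), y)

def explain(comment, top_k=3):
    """Return the tokens that contributed most toward flagging the comment."""
    vec = vectorizer.transform([comment])
    terms = vectorizer.get_feature_names_out()
    # Contribution of each present token = its weight in the detector * its tf-idf value.
    scores = {terms[j]: detector.coef_[0][j] * vec[0, j] for j in vec.nonzero()[1]}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(explain("you're surprisingly articulate for a girl"))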