We are looking for a software engineer excited about the potential of AI and LLM technology, but also concerned about disinformation and bad actors. This role is to help a small team prepare a proof-of-concept demo that combines open source LLMs with credibility ratings on sources. The result could be a personalized or team-specific chatbot grounded in specific context anchors, or one that helps combat disinformation and accepts feedback from human moderators.
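To give a flavor of the technical work, here is a minimal sketch of one way credibility ratings might gate the context an LLM sees. Everything in it (the Source type, build_prompt, the threshold) is a hypothetical illustration under assumed design choices, not existing project code.

```python
# Illustrative sketch only: filter and rank retrieved sources by a
# credibility rating before handing them to an LLM as grounded context.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    text: str
    credibility: float  # e.g. a structured peer-assessment rating in [0, 1]

def build_prompt(question: str, sources: list[Source],
                 min_credibility: float = 0.6) -> str:
    """Keep only sources above a credibility threshold, most credible first."""
    trusted = sorted(
        (s for s in sources if s.credibility >= min_credibility),
        key=lambda s: s.credibility,
        reverse=True,
    )
    context = "\n".join(f"[{s.credibility:.2f}] {s.url}: {s.text}" for s in trusted)
    return (
        "Answer using only the sources below, citing their URLs.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    sources = [
        Source("https://example.org/report", "Peer-reviewed finding...", 0.9),
        Source("https://example.net/rumor", "Unverified viral claim...", 0.2),
    ]
    print(build_prompt("What does the evidence say?", sources))
```

A moderator feedback loop could then adjust these ratings over time; that part is deliberately left out of the sketch.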
This is a fun and promising way to engage deeply with modern AI tech. We will be collaborating with the authors of recent papers such as https://people.csail.mit.edu/farnazj/pdfs/Leveraging_Structured_Trusted_Peer_Assessments_CSCW_22.pdf
We will be seeking funding from several sources over the next 12 weeks. If we are successful, some funds will become available, and those who worked on the project will have priority in deciding how to allocate them.