About this role
About the team
Seed Global Data is a team focused on producing international data for LLMs. In large-model training, data is the lifeline of model quality, and the Global Data team works closely with technical, product, and operations teams to ensure effective data production strategies and execution management.

As a key member of the Global Data team, the project intern will play a pivotal role in managing the intricate processes involved in training large language models (LLMs) with diverse datasets. The role focuses on overseeing and improving operational workflows, primarily for safety-related projects, and ensuring they are delivered with high quality and efficiency. As a project intern, you will engage in impactful short-term projects that offer a glimpse of real-world professional experience, gain practical skills through on-the-job learning in a fast-paced environment, and develop a deeper understanding of your career interests.

Responsibilities
Research and participate in model safety workflows to identify limitations in, and support improvements to, model safety and existing evaluation paradigms. This may involve:
- Conducting academic and industry research on the AI safety landscape.
- Performing systematic human or automated evaluations of model outputs for safety, bias, and harmful content.
- Scoring, categorizing, and comparing different models' responses using established safety rubrics.
- Evaluating existing evaluation frameworks and surfacing feedback on areas for improvement.
- Preparing documentation and summaries of evaluation and research findings, with detailed assessment reports for stakeholders.

Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases.
This may include, but is not limited to, text, images, or videos depicting:
- Hate speech or harassment
- Self-harm or suicide-related content
- Violence or cruelty
- Child safety
Support resources and resilience training will be provided to support employee well-being.

Qualifications

Minimum Qualifications
- Currently pursuing a Bachelor's or Master's degree in AI policy, Computer Science, Engineering, Journalism, International Relations, Law, Regional Studies, or a related discipline.
- Basic understanding of prompt engineering and large models; interest in and familiarity with AI safety concepts (bias, toxicity, harmful content, etc.).
- Strong analytical skills, with the ability to interpret both qualitative and quantitative data, and exceptional English communication skills to translate findings into clear insights.
- Creative problem-solving mindset, with comfort working under ambiguity and leveraging tools and technology to improve processes and outputs.

Preferred Qualifications
- Experience in AI safety, Trust & Safety, risk consulting, or risk management is highly desirable.
- A growth mindset, with genuine receptiveness and enthusiasm for continuous learning, and a readiness to actively solicit and apply constructive feedback.
- Intellectually curious, self-motivated, detail-oriented, and team-oriented.

By submitting an application for this role, you accept and agree to our global applicant privacy policy, which may be accessed here: https://jobs.bytedance.com/en/legal/privacy

If you have any questions, please reach out to us at apac-earlycareers@bytedance.com.