Codecademy’s Responsible AI Principles


At Codecademy, we’re committed to making sure that all the work we do is ethically sound. It’s a responsibility we take seriously, particularly now, as AI profoundly transforms our world. There’s a lot at stake, and we want the next generation of developers and people learning to code to understand what it means to be responsible, thoughtful, and informed AI practitioners. That’s why today we’re announcing Codecademy’s principles for using, building, and improving AI systems.

Fun fact: The initial idea to codify our AI principles came out of a recent internal hackathon project. As you can imagine, AI is top of mind for folks here at Codecademy, so the project resonated. From there, a diverse group of people across Codecademy — including Engineers, Data Scientists, Instructional Designers, and more — worked together to establish these AI principles and adapt them for everyone in the Codecademy community.

Our philosophy and our actions have always been guided by our learners: How can we help them achieve their coding goals? Our approach to AI is guided by the same belief. These principles shape our work here at Codecademy so that all our learners and community members can interact with AI systems in a safe and fair way. Everyone who interacts with an AI system should feel empowered to use these ideas as a compass to navigate whatever comes next.

Read on for the full details of each principle. For our learners and team members, we believe we should pursue:

1. Safe and effective systems: We make sure our tech works as intended before we ship it to you. We use AI thoughtfully and only when it makes the product better. We test it rigorously.

Our products are designed intentionally with cross-functional input and tested rigorously. We think deeply when we implement AI and carefully consider the possible impacts of the algorithms on our learners, on our team members, and on the world — beyond immediate profit. We won’t use AI for AI’s sake; instead, we clearly define the purpose of AI in a project and seek to limit the scope of its use in delivering the intended solution.

2. Algorithmic discrimination protections: We research existing algorithms to the best of our ability and investigate our own tech stack with an eye toward addressing potential risks and eliminating bias.

We take proactive and continuous measures to protect our learners and team members from algorithmic discrimination. Whether we build our own models or use models built and trained by other companies, we are clear about the potential risks and address them through our QA process. We assess algorithmic impact, evaluate results for disparities, and mitigate potential bias and discrimination whenever possible.

Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts, specifically disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.

3. Data privacy, notice, and explanation: We are transparent when we use AI. We give everyone the opportunity to understand our policies and opt in or out of having their data used to improve our models.

We strive to improve our learners’ experience, but not at the expense of our learners’ right to control their data. We win as a team, and our learners are part of that team. We communicate clearly if we intend to use learner data to inform future models and products. Learners can choose to opt out if they would prefer not to have their data used in product development.

4. Human alternatives, consideration, and fallback: While we might collaborate with some helpful robots, there is always a team of humans ready to answer your questions and support your learning journey.

We always keep humans in the loop. We improve relentlessly, and AI is part of that improvement. But even the most helpful robots need guidance. We think deeply as humans and build products for our learners that are guided by human support and decision-making.

5. Learner-centered experiences: Our goal is not just to “do no harm” with our tech, but to create learning tools and content that help our learners be thoughtful, informed, and responsible AI users.

When we use AI, it is to improve our learners’ experience. Additionally, we create a culture of learning around new technology. We invest in our team members by giving them the time and means to keep up with current technology, including AI.

With these principles always in mind, we’re prepared to embrace AI’s opportunities. Let us know what you think about our AI ethics principles and check out our catalog of new AI courses and case studies to start learning today. You can also read more about our approach to AI on the blog.
