AI POLICY
Pacific Community Ventures (PCV)
Policy Governing the Responsible and Ethical Use of Artificial Intelligence and Machine Learning
August 2025
PCV Colleagues, Industry Partners, and Supporters,
Human civilization has entered a new era of technological innovation, the fourth industrial revolution, driven by artificial intelligence (AI) and machine learning (ML). These technologies bring the power and potential to streamline our business operations, generate ideas and content, and process vast amounts of information at speeds beyond human capacity.
PCV has long been a data-driven, “high-tech and high-touch” organization, founded as one of the country’s first impact investing organizations. PCV is dedicated to investing in small business entrepreneurs’ passion and resilience, helping them create good quality jobs and advance economic mobility and financial wellbeing, while uplifting communities that have been historically underserved. PCV is also a federally and state-certified community development financial institution (CDFI), part of a field born out of the Civil Rights movement as its economic justice pillar more than 30 years ago, following a federal acknowledgment of discrimination and redlining in the formal financial industry.
We acknowledge the potential ethical issues that AI poses, and the impact, both positive and negative, that it can have on our employees, customers, and communities. We continue to research and work to understand the algorithmic bias AI has perpetuated in recent years, particularly in sensitive domains adjacent to ours, such as financial decisions, hiring, and criminal justice.
Therefore, we are committed to developing and implementing a comprehensive policy around the ethical use of AI and ML that aligns with our company’s values, mission, and goals. We believe that any decision to leverage an AI application, tool, methodology, or system must uphold individual and community consent, data autonomy and privacy, and respect for human agency and dignity.
Accordingly, in its use of AI and ML technologies, PCV strictly adheres to the following six core principles:
- We must ensure that the benefits of any AI we use outweigh the harm it may create, and that it does not disproportionately harm any particular group or individual.
- We must select or develop AI that has been trained on demographically disaggregated data, to prevent discrimination against, or disproportionate harm to, any person or group of persons by the use of that AI system. We acknowledge that algorithms trained without proportionally balanced demographics nearly always perform worse for low-income clients, clients of color, immigrants, and other underserved groups (see the illustrative evaluation sketch following this list).
- AI systems must never be used to discriminate against or exclude individual customers or groups of people in the course of business. AI systems include, but are not limited to, predictive models, matching algorithms, support-desk chatbots, recommendation engines, auto-generated content for marketing and communications, and tools and scripts that automate administrative tasks (e.g., finance, loan processing, and reporting).
- AI systems used at PCV must at all times prioritize and safeguard clients’ and customers’ privacy and data rights, both to protect data from abuse or monopolization and to ensure we abide by the loan-level privacy legislation we are obligated to follow as a CDFI. Clients include the small business entrepreneurs we invest in as a CDFI and their workers. Customers include other CDFIs and mission-driven organizations that PCV supports in its role as a capacity builder in data, analytics, ML, and AI. PCV commits to strict protection of customer and client data at all times, maintaining this data within a firewalled environment separated from other program data.
- AI systems should be used to inform and support human critical analysis and decision-making, and should never operate unsupervised or act as autonomous systems (i.e., humans must never be removed from the process or decision-making loop).
- We will not use autonomous AI to make lending decisions or other decisions of consequence without a change to this policy and CEO approval. Consequential decisions include, but are not limited to, the following:
  - Determining who will and will not receive a loan.
  - Determining who will and will not receive business advisory support.
  - Determining whom we will inform of our products, services, and new opportunities, such as participation in a study, a new pilot program, or incentives of any kind.
- If a particular AI-supported decision is in doubt, always ensure that no group is barred from a PCV product or service without cause or an objective rationale, and that no information is withheld from any particular group or groups, again, without an objective rationale.
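To make the disaggregated-data principle above concrete, the following is a minimal sketch of a pre-deployment check, written in Python, the analysis language this policy references. The dataset, the column names (“group”, “actual_repaid”, “model_approved”), and the four-fifths screening threshold are illustrative assumptions, not part of this policy; any real check would be defined and reviewed by the Data Revamp Working Group described below.

```python
# Illustrative pre-deployment check: compare model behavior across
# demographic groups. Column names below are hypothetical examples.
import pandas as pd

def disaggregated_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize approval behavior per demographic group."""
    def per_group(g: pd.DataFrame) -> pd.Series:
        repaid = g[g["actual_repaid"] == 1]  # clients who in fact repaid
        return pd.Series({
            "applicants": float(len(g)),
            "approval_rate": g["model_approved"].mean(),
            # Share of creditworthy applicants the model would have declined.
            "false_negative_rate": (
                1 - repaid["model_approved"].mean() if len(repaid) else float("nan")
            ),
        })
    report = df.groupby("group").apply(per_group)
    # Flag any group whose approval rate trails the best-served group by more
    # than 20 percent (a common screening heuristic; the threshold here is a
    # policy choice, not a legal standard).
    report["flagged"] = report["approval_rate"] < 0.8 * report["approval_rate"].max()
    return report
```

A report like this would be reviewed by a human before any tool moves past testing, consistent with the human-in-the-loop principle above.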
Because algorithms are trained on large data sets of lending, hiring, and criminal justice decisions that employed a definition of “risk” that has historically excluded low-income people and people of color, we must be particularly intentional about the use cases and guardrails we put in place to test these technologies within PCV and our industry. We recognize that these new technologies could easily wipe out the gains we have made to date with our high-tech, high-touch approach and accelerate bias in our field at a scale and speed we may never be able to address before adverse impacts become deeply entrenched.
We recognize that what we do as a CDFI, an impact investor, and a community development organization, in terms of how we deploy AI now, will set a precedent for the sector for years to come. As we increase our technology and AI capacities within PCV, we commit to sharing learnings with the CDFI and impact ecosystem, advocating for evidence-backed practices that center client and community voices, protecting against bias, curtailing market drift, preventing exclusion, and ensuring we seek the explicit consent of those we and our peers serve.
When collaborating with peer CDFIs and other mission-driven organizations, we must share our values and the parameters we consider to be acceptable uses of AI (detailed below). If, during the course of a collaboration, we observe divergence from these values and parameters, we will exercise the right to dissolve the collaboration. With those organizations we secure alignment with, we will work to build a community of practice that can help propagate AI standards that advance human dignity for all, especially underserved populations.
More specific guidelines follow:
- Always credit AI use: Whenever AI assists in creating content, whether co-authoring a piece of writing or generating it wholly, the employee must ensure that proper credit is given to the AI system or tool used. This must be done by adding a footnote or other indication in the document or publication.
- Ensure data privacy: PCV recognizes the importance of protecting sensitive customer data and confidential company information when feeding material into AI applications. Before using any AI tool, the employee must review the tool’s accompanying language on data security and obtain approval from the Chief Data Officer and CEO before initiating use (see below). PCV will ensure that appropriate security measures are in place, such as data anonymization, encryption, vetted plug-in tools, and access controls (to mitigate risks from third-party tools), to prevent unauthorized access or use of such information. A minimal anonymization sketch appears after this list.
- Exercise caution in decision-making: PCV recognizes that AI can automate standard processes that lead to decision-making. However, we must exercise caution and ensure that human oversight is an intentional part of the process to avoid biased or discriminatory decisions. PCV and its employees commit to regularly reviewing the AI systems and tools used to ensure their accuracy, fairness, and transparency. PCV will provide training to employees on the ethical use of AI – to ensure it is leveraged to inform and support human decision-making, not displace it.
- Partner with trusted institutions: PCV’s senior leadership is working to establish partnerships with trusted institutions, such as non-profit organizations, industry associations, and research institutions that have expertise in the proper and ethical use of AI (for example, Code for America, Black Wealth Data Center, and Data Equity). We will regularly seek their guidance on ethical AI practices and incorporate their recommendations into our policies and procedures.
- Uphold transparency: PCV and its employees must ensure that the use of AI is transparent to our stakeholders, customers, and communities. We will clearly communicate the intended use of AI systems and tools, their limitations, and any potential impacts on stakeholders. We will also provide an avenue for feedback and concerns related to AI use.
- Commit to continuous improvement: PCV will regularly review and update our policy on the ethical use of AI to reflect changing trends, best practices, and emerging risks. We will ensure that our policy aligns with industry standards, regulatory requirements, and our company’s values and mission.
- Measuring Bias: PCV will use KPIs such as the percentages of woman-identifying, BIPOC, and low- to moderate-income (LMI) clients to measure and track potential bias. An illustrative KPI sketch appears after this list.
- Data Revamp Working Group: PCV will apply the same data governance practices to generative AI tools that it applies to platforms like Downhome or Qooper. Any concerns will be surfaced to the internal working group for discussion with stakeholders.
- Approvals for Testing: Staff testing new AI and ML tools must submit a request for permission to the Chief Data Officer (CDO) and CEO, clarifying the use case they want to test, the intended result or value-add to PCV, and how they will adhere to the above guidelines. They must also communicate this policy to any external consultants, incorporate it into external MOUs, and receive a signature of confirmation. Explicit written approval from the CEO and CDO is required before any actions are taken.
- Usage of AI tools for Auto-Completion: AI tools are increasingly integrated into day-to-day applications and software such as email, Microsoft Office applications, and coding platforms. Auto-completion features, or “intelligent recommendations,” now accompany most applications and are intended to create efficiency. PCV employees are permitted to use auto-complete functionality in Word, Excel, PowerPoint, email, and other frequently used applications, but must review the auto-completed content for accuracy. Auto-completed code (for example, while writing Python or R for data analysis or algorithm development) is discouraged: auto-generated code often suggests shortcuts, does not fully comprehend mission-based use cases, and produces steps whose logic can be difficult to interpret during code reviews. If code-completion tools are used, the developer must check all logic, syntax, and outputs for accuracy, and must add comments in the code wherever auto-complete was used. In cases where inaccurate, misleading, or “hallucinated” content is delivered, whether to internal or external stakeholders, blaming auto-completion tools will not be tolerated. All PCV employees take personal responsibility for the content they produce as individuals, with or without auto-completion.
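As a companion to the data-privacy guideline above, here is a minimal sketch, in Python, of one way to redact obvious PII before text reaches a third-party AI tool. The patterns and placeholder tokens are illustrative assumptions; simple regular expressions will not catch names or free-form identifiers, so a screen like this supplements, and never replaces, the CDO/CEO approval step and human review.

```python
# Illustrative PII redaction before text is shared with an external AI tool.
# Patterns and placeholder tokens are examples, not an approved standard.
import re

REDACTIONS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[EIN]":   re.compile(r"\b\d{2}-\d{7}\b"),
}

def redact(text: str) -> str:
    """Replace common PII patterns with placeholder tokens."""
    for token, pattern in REDACTIONS.items():
        text = pattern.sub(token, text)
    return text

# Example: redact a client note before pasting it into an approved AI tool.
print(redact("Reached Dana at 415-555-0199, dana@example.com re: loan."))
# -> "Reached Dana at [PHONE], [EMAIL] re: loan."
```

Note that the client’s name survives redaction in the example, which is exactly why human review remains part of the process.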
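Similarly, the “Measuring Bias” KPIs above could be tracked with a short script like the sketch below. The boolean field names and the two hypothetical datasets (the applicant pool versus the clients actually served) are illustrative assumptions; the real fields and thresholds would come from the Data Revamp Working Group.

```python
# Illustrative tracking of the "Measuring Bias" KPIs: compare the demographic
# mix of clients served against the applicant pool. Field names are assumed.
import pandas as pd

KPI_FIELDS = ["woman_identifying", "bipoc", "lmi"]  # hypothetical boolean columns

def kpi_gap(applicants: pd.DataFrame, served: pd.DataFrame) -> pd.DataFrame:
    """Percentage-point gap between who applies and who is served."""
    rows = []
    for field in KPI_FIELDS:
        rows.append({
            "kpi": field,
            "pct_of_applicants": 100 * applicants[field].mean(),
            "pct_of_served": 100 * served[field].mean(),
        })
    out = pd.DataFrame(rows)
    # A persistently negative gap suggests a process or tool may be screening
    # a group out, and should be surfaced to the working group.
    out["gap_points"] = out["pct_of_served"] - out["pct_of_applicants"]
    return out
```

Tracked over time, a table like this gives the working group an early-warning signal rather than a one-time audit.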
By adopting this policy, PCV and its employees aim to promote the ethical use of AI and to build trust among all our stakeholders. We believe that responsible and ethical AI use generates value for our customers, brings efficiencies to our workflows, and contributes to innovation, growth, and better outcomes for underserved communities.
PCV staff referenced the following frameworks, articles, and experts in the creation of this policy:
- “AI Is Fundamentally Incompatible with Civil Rights”, Dr. Vivienne Ming (Socos Labs), The Urban Institute, 2017, https://www.youtube.com/watch?v=Cm7IJiokqz8
- “This Is Not the Industrial Revolution”, Dr. Vivienne Ming, Socos Academy, 2018, https://academy.socos.org/not-the-ir/
- “We don’t need an AI manifesto — we need a constitution”, Dr. Vivienne Ming, Financial Times, https://www.ft.com/content/b16fab3e-7f19-49ab-9bbb-9bfeccbaf063
- “Technologist Vivienne Ming: AI is a human right”, Dr. Vivienne Ming, The Guardian, 2018, https://www.theguardian.com/technology/2018/dec/07/technologist-vivienne-ming-ai-inequality-silicon-valley
- “Human at the Helm: Build Trust in AI with a Human Touch”, Salesforce Office of Ethical and Humane Use, 2024, https://humanatthehelm.splashthat.com/
- “The Data Equity Framework”, We All Count, 2022, https://weallcount.com/the-data-process/
- “Responsible Use of Technology”, World Economic Forum, 2019, https://www3.weforum.org/docs/WEF_Responsible_Use_of_Technology.pdf
- “The Fourth Industrial Revolution: what it means, how to respond”, Klaus Schwab, World Economic Forum, 2016, https://www.weforum.org/stories/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
- “Blueprint for an AI Bill of Rights”, The White House, 2022, https://www.whitehouse.gov/ostp/ai-bill-of-rights/
- “AI Risk Management Framework”, National Institute of Standards and Technology, 2023, https://www.nist.gov/itl/ai-risk-management-framework
- “Voluntary Commitments by Microsoft to Advance Responsible AI Innovation”, Microsoft, 2023, https://blogs.microsoft.com/on-the-issues/2023/07/21/commitment-safe-secure-ai/
This policy was drafted over the last two years by a cross-staff data governance working group within PCV, with support from PCV Advisory Council member Dr. Vivienne Ming, an ethical artificial intelligence expert at Socos Labs in Berkeley, CA.