AI Policy for Churches and Ministries

May 15, 2024


Jonathan Smith is the President of MBS, Inc. and the Director of Technology at Faith Ministries in Lafayette, IN.
He is an author and frequent conference speaker.

You can reach Jonathan on X @JonathanESmith.

Copyright © 2024 Jonathan Smith. All Rights Reserved.

Artificial Intelligence is everywhere, and like it or not, everyone is using it. That means those inside and outside of your ministry are probably already using AI. Whether that is good or bad is ultimately a leadership decision every ministry is going to have to make. While the legal system is struggling to keep up, it would be wise for churches and ministries to address the appropriate use of AI and, at a minimum, understand who owns the information being put into and generated by these language models.

Our goal is not to tell you if you should or should not use AI, but to provide a template that helps leaders lead. The use of AI is a leadership decision and not one that should be made by the tech team.

Here is a template that we built to help guide leadership through the decision-making process regarding the use of AI.

The template provides two options. Option 1 is a total ban on any use of AI. While this protects a ministry from misuse now, it will not stand the test of time. Churches and ministries should use technology as a tool to increase Kingdom impact. Even if you are not ready for AI now, at some point in the future it will be built into everything, making a total ban impractical.

Option 2 requires proper approval for using AI. It is essential that those who can approve AI use have some knowledge of AI. Each language model has its own terms of service that make some models more or less suitable for ministries to use. There are also many free models, but the quality and ownership of the data and answers often depend on whether you pay. A policy that requires approval for AI use offers adaptability as the AI field keeps changing quickly.

The terms of service for each AI site will further refine acceptability. For instance, OpenAI says you own the input and output, but Bard does not. Copilot excludes paid users' data from being used to train the models. Others do not. Because these terms are constantly changing, ministries must understand that acceptable use is not a one-time decision.

Any authorized use of AI must also respect your copyright and intellectual property policies. We have not added those here because those parts of your employee handbook should cover any medium, including AI.

We urge all ministries to consult with a qualified legal expert before using any employment documents that may have legal consequences.


As generative AI models like OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Copilot, and others become more popular, we need to define the appropriate use of these tools for our work at [Ministry Name]. We support using new technologies to help us achieve our mission, when possible, but we also recognize the dangers and limitations of generative AI models and want to ensure they are used wisely. Our aim is to safeguard employees, clients, suppliers, parishioners, and the ministry from harm.

Option 1 – Prohibited

All use of generative AI models while performing work for [Ministry Name] is prohibited. Ministry email addresses, credentials, or phone numbers cannot be used to create an account with these technologies, and no ministry data of any kind may be submitted (copied, typed, etc.) into these platforms.

Any violation of this policy will result in disciplinary action, up to and including termination.

Option 2 – Limited Use

Use of generative AI models will be allowed while performing work for [Ministry Name] with the approval of your [manager/director/etc.]. Ministry email addresses, credentials, or phone numbers [can/cannot] be used to create an account with these technologies. Ministry data of any kind [may/may not] be submitted (copied, typed, etc.) into these platforms.

Employees wishing to use generative AI models must inform their [manager/director/etc.] [verbally/in writing] which model will be used and how it will be used. Managers must approve or deny requests within [enter number] days.

The accuracy of content produced by AI must be verified before it is used for work purposes. If factual information generated by a model cannot be confirmed by a trustworthy source, that information cannot be used for work purposes.
