

How I Built a Daily Privacy News Report with AI (and What I Learned About Privacy Prompt Engineering)

Ian Phippen - April 13, 2026

I used to encourage my team to spend hours every week simply reading about privacy news. There are so many newsletters and LinkedIn influencers that the most relevant news can get buried quickly. Our Slack community trusts us to break through the noise and bring the most important privacy news to their attention. 

Over the past six months, we’ve been experimenting with a better way to do just that. Today, we’re finally at a point where I feel confident leaving my search tabs behind. Building an AI-powered daily privacy news report helped me deliver time savings to my team and more useful information to our community, but it also taught me a lot about working with AI in general. It wasn’t a linear path to get here.

Whether you’re interested in building your own report or you just want a peek behind the curtain at what iterating on AI projects can look like in real life, read on. We’ll talk about implications for other privacy & AI projects as we go. 

Lesson #1: Sharing is caring.

I didn’t start from scratch. One of the most practical benefits of being part of a privacy community is that people share what works, and the Privacy Roundtable #ai-labs channel is exactly that kind of place. Privacy peers exchange prompts, flag what’s working, and surface ideas they’ve found useful. After a community member shared a list of prompts they were experimenting with, a member of my team decided to adapt it for our purposes. We moved it to ChatGPT and built from there. 

If you’re looking for inspiration on other ways to use AI prompts in privacy work, start with our prompt library. Select a topic that interests you and copy and paste the prompt into your preferred enterprise AI tool. 

That part is important: while most of these prompts don’t require you to share sensitive or private information, I generally prefer the stronger privacy and security guardrails of an enterprise tool anyway. Free AI tools usually come with a privacy trade-off. Chat is snappy and adapts well to your interests, but those same great qualities can shorten the time you take to second-guess the information you share. Paid licenses also offer higher-quality results: for example, they are less likely to hallucinate citations. We’ll assume you’re using an enterprise license for this guide.

Lesson #2: Customize the prompt. 

Once you’ve found a prompt that piques your interest, it’s time to customize it. However fantastic the prompt engineer was, they’re not you.

My first few edits to the prompt included the following (a sketch of the result appears after the list):

  • A limit to how many articles I was interested in reading
  • Adjustments to the output format to make it skimmable
  • Priority rankings for the topics and sources I valued most
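
Put together, those edits turned the generic prompt into something closer to this (the wording below is illustrative, not our exact production prompt):

    Return no more than five articles per day.
    Format each item as: headline, source, link, a one-sentence summary,
    and a one-line "why it matters" for a privacy program.
    Prioritize topics in this order: (1) final regulations and
    enforcement actions, (2) regulator guidance, (3) major litigation.
    Prefer primary sources and established trade press over personal blogs.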

This is also a great place to provide extra context about your organization, assuming you’re on an enterprise account. If you deal in health data, specify that biometric privacy laws are of interest to you. If you sell a product for kids, ask the model to prioritize news about children’s privacy. 

Make sure your prompt gives you only as much as you need. Without those limits, your AI tool will provide an excessive amount of information to cover its bases, which can cause information overload and lead you to abandon ship. Be specific. For example, do you want to hear about interim regulatory developments, or are you only interested in the final regulation text you need to comply with? Explicitly identifying what you don’t want from the AI is just as important as precisely describing what you do want. The same concept applies to other types of privacy prompt engineering.
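
To make that concrete, exclusion language in the same spirit might read like this (paraphrased, not quoted from our prompt):

    Do not include: opinion pieces, vendor marketing, conference
    announcements, or product reviews.
    Skip interim drafts and proposals unless they signal a material
    change to the final requirements.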

Lesson #3: Expect to iterate.

The first week of running your new configuration is really a calibration week. Don’t be discouraged if the output isn’t quite right. Correct it. That feedback is part of the process. 

In my case, I needed to teach ChatGPT the difference between news a passionate consumer might be interested in and news a privacy manager can actually act on. I needed to let ChatGPT know it was OK if there was no news some days. The quality of articles improved over time. 
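
A typical correction from that stretch looked something like this (reconstructed from memory rather than copied from a chat log):

    Item 2 is consumer-interest news, not something a privacy manager
    can act on. Going forward, only include items that create or change
    an obligation, deadline, or enforcement risk. If nothing qualifies,
    reply "No actionable privacy news today" instead of padding the list.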

The key habit here is treating the corrections as part of your setup, not as failures. Every adjustment you make teaches the system something, and each one brings the report closer to what you actually need. Just don’t expect to build it in a single session.

Lesson #4: Plan for memory loss.

Eventually, things started slipping. The AI would miss major news events. It would quietly stop following requirements I’d established weeks earlier. It would skip a week altogether. 

As it turns out, when you run a scheduled command in a single chat, this is expected. AI tools work with something called a context window, which is essentially the amount of conversation they can actively “hold in mind” at once. In a long-running chat, older instructions and corrections start to fall outside that window. 

My first instinct was to manually copy and paste my full prompt into a fresh chat whenever this happened. It worked at first, but wasn’t a good long-term solution. Since we were using ChatGPT, my next tactic was to move the data privacy news report to its own “project.” 

A regular chat is ephemeral. The context lives in that one conversation and degrades over time. A Project gives you a persistent space where you can store key instructions, preferences, and corrections that carry across multiple chat sessions. So when a chat ran out of memory and I needed to start a new one, the Project already knew the most important things about what I was trying to do.

Any time I made a correction in a chat that seemed especially useful, I would add the key point back into the Project context so it wouldn’t get lost again. Think of it as maintaining a living brief for your AI assistant.
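
For illustration, an excerpt of that living brief might look like this (simplified from what we actually keep in the Project):

    Audience: privacy managers, not consumers.
    Never repeat a story covered in the previous two weeks.
    "No news today" is an acceptable answer.
    Always link to the primary source, not an aggregator.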

This helped a great deal, but it still wasn’t perfect. 

Lesson #5: AI tools handle prompts differently.

At some point I decided to try moving to Claude. To do it cleanly, I asked ChatGPT to summarize everything it had learned about my preferences and workflow into a single prompt, which I then edited manually to clean up and refine. That export became the foundation for what came next.
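
If you want to attempt the same migration, the hand-off request can be simple. Mine was close to this (paraphrased):

    Summarize everything you have learned about my preferences for the
    daily privacy news report into a single reusable prompt. Organize it
    into sections: topic priorities, format rules, exclusions, and tone.
    Include every standing correction I have given you.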

With Claude, I had the option to turn my prompt into something called a Skill. Claude Skills are reusable instruction sets stored in a project, separate from any individual conversation. Instead of keeping a long chat running and hoping the context holds, I could start from my Skill fresh every time, with the full set of instructions intact.
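
Under the hood, a Skill is a folder containing a SKILL.md file: YAML frontmatter that names and describes the skill, followed by the instructions themselves. A stripped-down sketch of ours might look like this (the details are illustrative):

    ---
    name: daily-privacy-news
    description: Compile a daily privacy news briefing for the team.
    ---
    Search for privacy news published in the last 24 hours.
    Return at most five items: headline, source, link, a one-sentence
    summary, and why it matters to a privacy program.
    If nothing meets the bar, say so. Do not pad the report.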

The practical advantage: the memory degradation problem essentially disappears. Every execution starts from the same clean state. Any time I make an improvement to my instructions, I update the Skill and it applies to every future run.

I then configured a scheduled task to use that Skill every day and gave it access to our team’s Slack channel to post the output. The automation was live. 

There were pros and cons to this switch. Spoiler: we’re still using Claude today, so it was worth it for us. In our case, ChatGPT was generally better at following rules like not repeating topics, and Claude was generally better at picking interesting and relevant articles.

When it comes to your own privacy prompt engineering, here’s what I advise:

  • If strictly following the rules you set is the most important factor for you, ChatGPT will work well.
  • If you want the potential for more insightful responses and deeper web search, Claude can provide more value.
  • If cost efficiency is most important to you or you’re working with a tight token cap, use ChatGPT. 
  • If you’re frustrated with your results and you have another option, test out your alternative. 

Also consider what your peers use. As of this writing, ChatGPT has slightly stronger collaboration tools, but if your coworkers all use Claude, sharing Claude Skills will get you much closer to a standardized result than asking colleagues to use a specific Project folder.

If you choose Claude, you’ll also choose an AI model for each request. Haiku is the most cost-efficient, but is more likely to forget aspects of detailed instructions. Sonnet and Opus provide much more layered responses and have greater capacity to follow complex instructions. 

Lesson #6: Keep an eye on operational costs.

Our privacy news agent was never as expensive as spending our own hours to compile the research would be, but after we moved to Claude it did become more costly than expected. We tried swapping to Haiku, but didn’t love the results.

When you’re navigating cost versus quality like this, one technique worth trying is asking the AI itself to analyze your prompt and identify what parts of it are most computationally expensive. 
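
The request itself doesn’t need to be sophisticated. Something along these lines will do (paraphrased from what I actually asked):

    Review the attached prompt. Identify which instructions drive the
    most cost per run, such as web searches, long output formats, or
    redundant checks, and propose changes ranked by estimated savings
    against their likely impact on quality.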

When I tried this, Claude suggested about nine different ways I could reduce the daily cost of my prompt. I reviewed each technique and implemented just the tactics I agreed were worth the trade-off. If you’re using Cowork to analyze the prompt, Claude can even implement all requested changes on its own. 

Lesson #7: Humans must stay in the process.

I considered letting my AI tool share the news on Slack directly. I even tried it out! But here’s the thing: people don’t care much about what anyone’s AI has to say except their own.

Even when I asked Claude to include more editorial analysis or strategic snippets when sharing the news, people just didn’t read or engage with it that way. The same article shared by a member of our team, even with a goofy surface-level take, was received better than Claude’s deepest analysis. Give AI the job it’s best at (in our case, scouring the internet for recent privacy news) and the human the job most worth their time (interpreting the news in ways people care about).

The same idea will extend to any other privacy prompt you use. Information is relational. We can learn from AI, but we’re motivated by people. AI can’t replace a privacy practitioner, but AI can extend their reach and impact. When we preserve the human role in the process, we also avoid wasting hours tinkering with AI to get the perfect results at something it’s just not built for. 

Lesson #8: Hand it off.

Just as you improved the prompt you started with in Lesson #1, there comes a time when it’s important to give your new prompt a fresh set of eyes. There’s so much more I could do with our privacy news report. I’ve considered creating a tighter topic taxonomy or breaking the process into multiple separate Skills to take advantage of Haiku’s token efficiency where possible. But I want to ensure the tool works well for the entire team, so the best way to prioritize its next iteration is to let someone else experiment.

It’s not just my tool, it’s our team’s tool, and in our case it ultimately needs to serve a global community of 2,000+ privacy practitioners. With a fresh perspective, my colleague quickly realized that giving our scheduled task access to certain public RSS feeds could make it more efficient and effective at its job. Letting more people interact with the tool in their own way will improve your AI project more than any single person’s search for “perfect” ever could.
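
If you want to experiment with the same idea outside your chat tool, pulling a feed takes only a few lines of Python. Here’s a minimal sketch, assuming the third-party feedparser package and a placeholder URL rather than one of our actual sources:

    # Minimal RSS pull for a news briefing pipeline.
    # Requires: pip install feedparser
    import feedparser

    FEEDS = [
        "https://example.com/privacy-news.xml",  # hypothetical feed URL
    ]

    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries[:5]:  # cap items, as in our prompt
            print(f"{entry.title} - {entry.link}")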

Final Thoughts

If you’re a privacy professional who has been curious about putting AI to work, simple scheduled prompts like this project are a great start. The stakes are low if the output isn’t right. And the process of teaching the AI what you want is genuinely useful practice for understanding what you can expect from these tools on more consequential work.

You don’t have to figure out the whole picture before you start. Pick one thing. Set expectations that are realistic. Expect some mess in the first week. And keep going.
