
Announcing the DataHub Blog Community Contributor Program!

Open Source

Community

Blog

DataHub

Data

Elizabeth Cohen

Aug 15, 2022



We are excited to announce the launch of our DataHub Blog Community Contributor Program! Contribute your perspective to our rich community of current and aspiring data practitioners, developers, and leaders. We are looking for Community Members to submit posts about:

  • Evaluating and deploying DataHub — help others understand your adoption journey 🧐
  • Modern approaches to Data Governance, Metadata Management, Data Mesh, and beyond 🚀
  • DataHub technical deep dives and tutorials 💻
  • Extending and customizing DataHub 💡

We know your time and effort are valuable; as a token of appreciation, you will receive $100 USD per published post.

Why become a contributor to the DataHub blog?

Inspire others and spark meaningful conversations. The DataHub Community is a one-of-a-kind group of data practitioners who are passionate about enabling data discovery, data observability, and federated data governance. We all have so much to learn from one another as we collectively address modern metadata management and data governance; by sharing your perspective and lived experiences, we can create a living repository of lessons learned to propel our Community toward success.

Reach a broader audience. The DataHub Blog is distributed to thousands of Community members via our newsletter, Slack, and more. Through our contributor program, you will have the opportunity to share your work with our rapidly growing global community of data practitioners and the broader Medium community.

Here are a few things we do to ensure your articles reach the largest audience possible:

  • We have a custom domain that can help drive more traffic to your article.
  • We feature our best content on our publication’s pages and social media (LinkedIn, Twitter, website).
  • We send newsletters that feature stories and writers.

Getting Started — Write for the DataHub Blog

If you have an idea for a post you would like to write, reach out to @Elizabeth Cohen on our DataHub Slack community. You will be added to our #contribute-datahub-blog channel where we will collaborate with you to prepare your post for publication.

Here is the process:

  1. Create a shared Google doc with your content and ask for a review in our #contribute-datahub-blog channel. The core DataHub Team will review your post within one week and request any changes. We will give critical feedback on your post — our goal is for you to publish something of high quality that will be a long-term reference for our community. All feedback will be constructive. We can give feedback early in the writing process if you want more structural feedback, or later if you prefer to share a more finished product.
  2. Once you have made all of the requested changes, we will determine a publish date.
  3. We will invite you to Medium as a contributor, where you will create and submit your content to the DataHub Blog.

If you want to workshop ideas, we have a channel for that! Send a note on Slack in #contribute-datahub-blog. If you have other questions or concerns, please reach out to Elizabeth on Slack.

Submission Guidelines

Before submitting your article, there are a few essential things you need to know. Make sure you read and understand these well, because by submitting an article to the DataHub Blog, you agree to comply with all of them.

  1. Medium’s Rules and Terms of Service apply to the DataHub Blog as it is a Medium publication.
  2. As explained in Medium’s Terms of Service, you own the rights to the content you create and post on Medium and, therefore, the DataHub Blog. You’re also responsible for the content you post. This means you assume all risks related to it, including someone else’s reliance on its accuracy or claims relating to intellectual property or other legal rights.
  3. We have adopted Medium’s Curation Guidelines for every article we publish. This means that if your post isn’t of a high enough quality to be curated or doesn’t follow the guidelines, we won’t publish it on the DataHub blog.
  4. Please limit your submissions to one per day; additional posts will not be considered. You’re welcome to resubmit them in the future.
  5. You can make minor edits to a published article as long as they respect our rules and guidelines.
  6. We might directly edit your content to correct basic spelling mistakes and make minor formatting updates. We might also remove images whose source isn’t clearly stated. Copyright violation is a real thing and could happen to you; it is your responsibility to ensure you own or have a legal right to use all content, images, and videos you include in your articles.
  7. We can remove any articles you post on the DataHub Blog for any reason. If we do so, your content will not be lost; it will still be hosted on Medium.com, and readers will be redirected there.

Biography/Byline

Authors of DataHub Blog posts will receive an author byline, and a brief biography will appear at the end of the article. The biography should be no more than three sentences and should describe your credentials, your title and company (with city, state, and country), and, if you want to keep the dialogue open, how readers can contact you. We encourage you to provide a headshot as well.

Ready to get started? Join the #contribute-datahub-blog channel! ✨


