
Commit 568e0eb

Minor edits
1 parent 9e5f5e0 commit 568e0eb

1 file changed

Lines changed: 49 additions & 53 deletions

File tree

notebooks/v2/analyzing_obesity_prevalence.ipynb

@@ -1,27 +1,5 @@
 {
 "cells": [
-{
-"cell_type": "markdown",
-"metadata": {
-"colab_type": "text",
-"id": "view-in-github"
-},
-"source": [
-"<a href=\"https://colab.research.google.com/github/datacommonsorg/api-python/blob/master/notebooks/analyzing_obesity_prevalence.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {
-"id": "srAnaUPPbrH6"
-},
-"source": [
-"Copyright 2025 Google LLC.\n",
-"SPDX-License-Identifier: Apache-2.0\n",
-"\n",
-"**Notebook Version** - 2.0.0"
-]
-},
 {
 "cell_type": "markdown",
 "metadata": {
@@ -32,7 +10,7 @@
 "\n",
 "**Objective:** This notebook demonstrates how to use Data Commons to build a linear regression model predicting the prevalence of obesity in US counties.\n",
 "\n",
-"**Background:** Obesity prevalence is known to correlate with various health and socio-economic factors [[1]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3198075/)[[2]](https://www.ncbi.nlm.nih.gov/pubmed/26562758). Data for these factors often reside in separate datasets from different government agencies.\n",
+"**Background:** Obesity prevalence is known to correlate with various health and socio-economic factors [[1]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3198075/)[[2]](https://www.ncbi.nlm.nih.gov/pubmed/26562758). Data for these factors often reside in separate datasets from different government agencies:\n",
 "* The Centers for Disease Control (CDC) provides health condition prevalence data (e.g., obesity, high blood pressure).\n",
 "* The US Bureau of Labor Statistics (BLS) provides unemployment rates.\n",
 "* The US Census Bureau provides poverty rates and population counts.\n",
@@ -51,13 +29,35 @@
 "*Note:* The US Census also provides unemployment statistics. Using BLS data here is for demonstration purposes. Comparing results using Census unemployment data could be a potential extension."
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {
+"colab_type": "text",
+"id": "view-in-github"
+},
+"source": [
+"<a href=\"https://colab.research.google.com/github/datacommonsorg/api-python/blob/master/notebooks/analyzing_obesity_prevalence.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {
+"id": "srAnaUPPbrH6"
+},
+"source": [
+"Copyright 2025 Google LLC.\n",
+"SPDX-License-Identifier: Apache-2.0\n",
+"\n",
+"**Notebook Version** - 2.0.0"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {
 "id": "7SnIECsk7Csw"
 },
 "source": [
-"# 1. Setup Environment\n"
+"## 1. Set up environment\n"
 ]
 },
 {
@@ -66,7 +66,7 @@
 "id": "pysygfoq43NF"
 },
 "source": [
-"### 1.1 Install Libraries\n",
+"### 1.1. Install libraries\n",
 "\n",
 "Install the [datacommons-client](https://pypi.org/project/datacommons-client/) library."
 ]
@@ -92,7 +92,7 @@
 }
 ],
 "source": [
-"!pip install datacommons-client --upgrade --quiet"
+"!pip install \"datacommons-client[Pandas]\" --upgrade --quiet"
 ]
 },
 {
@@ -101,7 +101,7 @@
 "id": "BtLVyFoN5AiI"
 },
 "source": [
-"### 1.2 Import Dependencies\n",
+"### 1.2. Import dependencies\n",
 "\n",
 "Import required libraries for data manipulation, modeling, and plotting.\n"
 ]
@@ -134,7 +134,7 @@
 "id": "ZXzO6qSc5Xk0"
 },
 "source": [
-"### 1.3 Initialize Data Commons Client\n",
+"### 1.3. Initialize Data Commons client\n",
 "\n",
 "Initialize the client using your Data Commons API key. Obtain a key from [apikeys.datacommons.org](https://apikeys.datacommons.org/) if you don't have one.\n"
 ]
@@ -158,7 +158,7 @@
 "id": "Ccy9-czCfVTn"
 },
 "source": [
-"## 2. Data Acquisition\n",
+"## 2. Data acquisition\n",
 "\n",
 "Fetch statistical observations for the specified variables for all US counties for the year 2021 using the [Python Data Commons API](https://docs.datacommons.org/api/python/v2/)."
 ]
@@ -567,15 +567,15 @@
 "id": "z191ImVmrdds"
 },
 "source": [
-"## 3. Data Preparation\n",
+"## 3. Data preparation\n",
 "\n",
 "Process the fetched data for modeling:\n",
 "\n",
 "1. **Filter:** Keep only relevant observations based on their `measurementMethod`. For CDC data, this is typically `AgeAdjustedPrevalence`. For Census, `CensusACS5YearSurvey`, and for BLS, `BLSSeasonallyUnadjusted`.\n",
-"1. **Select Columns:** Keep only essential columns: `entity`, `entity_name`, `variable`, `value`.\n",
+"1. **Select columns:** Keep only essential columns: `entity`, `entity_name`, `variable`, `value`.\n",
 "1. **Pivot:** Reshape the dataframe so each variable becomes a column, indexed by county `entity` and `entity_name`.\n",
-"1. **Calculate Poverty Rate:** Compute the poverty rate percentage using the population count and the count of people below the poverty level.\n",
-"1. **Handle Missing Values:** Drop rows (counties) with any missing values for the selected variables.\n"
+"1. **Calculate poverty rate:** Compute the poverty rate percentage using the population count and the count of people below the poverty level.\n",
+"1. **Handle missing values:** Drop rows (counties) with any missing values for the selected variables.\n"
 ]
 },
 {
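The pivot and poverty-rate steps described in this cell can be sketched in pandas. The rows and statistical-variable identifiers below are illustrative stand-ins, not the notebook's actual query results:

```python
import pandas as pd

# Illustrative long-format observations, one row per (county, variable)
df = pd.DataFrame({
    "entity": ["geoId/06001"] * 3,
    "entity_name": ["Alameda County"] * 3,
    "variable": [
        "Percent_Person_Obesity",  # hypothetical DCID for illustration
        "Count_Person",
        "Count_Person_BelowPovertyLevelInThePast12Months",
    ],
    "value": [22.0, 1_600_000.0, 160_000.0],
})

# Pivot: one column per variable, indexed by county entity and name
wide = df.pivot_table(
    index=["entity", "entity_name"], columns="variable", values="value"
).reset_index()

# Poverty rate as a percentage of the population, then drop incomplete rows
wide["poverty_rate"] = (
    100 * wide["Count_Person_BelowPovertyLevelInThePast12Months"]
    / wide["Count_Person"]
)
wide = wide.dropna()
print(wide["poverty_rate"].iloc[0])  # 10.0
```

With real data, `dropna()` removes counties missing any of the selected variables, so the pivoted frame is rectangular before modeling.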
@@ -975,7 +975,7 @@
 "id": "-ZGRFaJKdHIO"
 },
 "source": [
-"## 4. Exploratory Data Analysis\n",
+"## 4. Exploratory data analysis\n",
 "\n",
 "Visualize the relationships between the target variable (Obesity Prevalence) and the predictor variables (High Blood Pressure Prevalence, Unemployment Rate, Poverty Rate) using scatter plots. This helps assess potential correlations.\n"
 ]
@@ -1102,7 +1102,7 @@
 "id": "Bp52dWJNfYSa"
 },
 "source": [
-"## 5. Model Training\n",
+"## 5. Model training\n",
 "\n",
 "Train a linear regression model to predict obesity prevalence based on the selected predictors.\n",
 "\n",
@@ -1111,7 +1111,7 @@
 "$$f_\\theta(x) = \\theta_0 + \\theta_1 (\\text{high blood pressure}) + \\theta_2 (\\text{unemployment}) + \\theta_3(\\text{poverty rate})$$\n",
 "<br>\n",
 "\n",
-"### 5.1 Prepare Features and Target Variable\n",
+"### 5.1. Prepare features and target variable\n",
 "Define the feature matrix `X` (predictors) and the target vector `Y` (obesity prevalence).\n",
 "\n",
 "Let's start by creating our training and test sets. We'll then train a linear regression model using Scikit learn's [LinearRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html)"
@@ -1137,10 +1137,9 @@
 "id": "rmidaLTx_6C9"
 },
 "source": [
-"### 5.2 Split Data\n",
+"### 5.2. Split data\n",
 "\n",
-"Split the data into training and testing sets (80% train, 20% test).\n",
-"\n"
+"Split the data into training and testing sets (80% train, 20% test)."
 ]
 },
 {
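The 80/20 split described in this cell can be sketched with scikit-learn; synthetic arrays stand in for the notebook's three-predictor feature matrix `X` and target `Y`:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins: 100 counties, 3 predictors (high blood pressure,
# unemployment, poverty rate), one target (obesity prevalence)
X = rng.normal(size=(100, 3))
Y = rng.normal(size=100)

# Hold out 20% of counties for testing; fix random_state for repeatability
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)  # (80, 3) (20, 3)
```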
@@ -1176,7 +1175,7 @@
 "id": "hu2t8OAGAGFp"
 },
 "source": [
-"### 5.3 Train Linear Regression Model\n",
+"### 5.3. Train linear regression model\n",
 "\n",
 "Instantiate and train the [LinearRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) model using the training data.\n",
 "\n"
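The fit step this cell describes reduces to a single `fit` call. A minimal sketch on toy data with a known relationship (y = 2x + 1), so the recovered coefficients are easy to check:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy single-feature data generated from y = 2x + 1
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2 * X.ravel() + 1

# Fit ordinary least squares; coef_ and intercept_ hold the learned theta
model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)  # coef_ ~ [2.0], intercept_ ~ 1.0
```

In the notebook the same call is made with the county-level `X_train` and `Y_train` in place of the toy arrays.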
@@ -1217,12 +1216,11 @@
 "id": "dBmThySxaXKp"
 },
 "source": [
-"## 6. Model Evaluation\n",
+"## 6. Model evaluation\n",
 "\n",
 "Assess the performance of the trained model using the Mean Squared Error (MSE) metric and residual analysis.\n",
 "\n",
-"\n",
-"### 6.1 Calculate Mean Squared Error (MSE)\n",
+"### 6.1. Calculate Mean Squared Error (MSE)\n",
 "\n",
 "Define a function for MSE and calculate it for both the training and test sets. Lower MSE indicates better fit.\n",
 "\n"
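The MSE function this cell asks for can be written in a few lines (the notebook's own definition may differ in detail; this is one straightforward version):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between observed and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Squared errors are 0, 0, 4, so the mean is 4/3
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # ~ 1.333
```

Applied to both `(Y_train, model.predict(X_train))` and `(Y_test, model.predict(X_test))`, it gives the train and test errors the cell compares.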
@@ -1271,7 +1269,7 @@
 "id": "VsGLliuzawPE"
 },
 "source": [
-"### 6.2 Analyze Residuals\n",
+"### 6.2. Analyze residuals\n",
 "\n",
 "Calculate and plot the residuals (difference between predicted and actual values) for the test set. Residuals ideally should be randomly scattered around zero."
 ]
@@ -1317,9 +1315,7 @@
 "id": "8VE-arrmbLNL"
 },
 "source": [
-"*Evaluation Summary:* The model achieves a test MSE of approximately 10%. The residual plots provide insights into the model's error distribution.\n",
-"\n",
-"\n",
+"*Evaluation summary:* The model achieves a test MSE of approximately 10%. The residual plots provide insights into the model's error distribution.\n",
 "\n",
 "How well does your model perform? We were able to achieve an MSE for the test set of approximately 10% points from the observed obesity prevalence. Our model was also able to fit the data with the residuals clustered between -20% and 30%, which for a simple model considering only three explanatory variables isn't so bad."
 ]
@@ -1330,23 +1326,23 @@
 "id": "qapl33x8fy_A"
 },
 "source": [
-"## 7. Conclusion and Next Steps\n",
+"## 7. Conclusion and next steps\n",
 "This notebook demonstrated the use of Data Commons to efficiently acquire data from multiple sources (CDC, BLS, Census) and build a simple linear regression model to predict obesity prevalence in US counties. Data Commons significantly streamlines the data gathering and integration process.\n",
 "\n",
 "The resulting model, using high blood pressure prevalence, unemployment rate, and poverty rate, provides a baseline prediction.\n",
 "\n",
-"**Potential Improvements & Further Exploration:**\n",
+"**Potential improvements & further exploration:**\n",
 "\n",
 "* Add More Variables: Incorporate other variables known or hypothesized to correlate with obesity, such as:\n",
 " * `Percent_Person_WithHighCholesterol`\n",
 " * `Percent_Person_WithDiabetes`\n",
 " * Educational attainment levels\n",
 " * Access to healthy food outlets\n",
 " * Physical inactivity rates\n",
-"* **Feature Engineering:** Create new features from existing ones.\n",
-"* **Model Selection:** Experiment with different regression models (e.g., Ridge, Lasso, tree-based models).\n",
-"* **Geographic Analysis:** Explore spatial patterns in obesity prevalence and model errors.\n",
-"* **Alternative Data Sources:** Compare model performance using Census unemployment data instead of BLS data.\n",
+"* **Feature engineering:** Create new features from existing ones.\n",
+"* **Model selection:** Experiment with different regression models (e.g., Ridge, Lasso, tree-based models).\n",
+"* **Geographic analysis:** Explore spatial patterns in obesity prevalence and model errors.\n",
+"* **Alternative data sources:** Compare model performance using Census unemployment data instead of BLS data.\n",
 "Data Commons provides access to a wide range of variables, enabling exploration of correlations with factors like university counts, crime rates (e.g., arson), or environmental factors (e.g., snowfall), potentially leading to more comprehensive models."
 ]
 }
