
[PULL REQUEST] Updating Mortality Calculations#104

Open
KPHSANDAG wants to merge 14 commits into main from mortality-updates

Conversation

@KPHSANDAG
Contributor

Describe this pull request. What changes are being made?

This pull request alters how mortality is calculated for the CCM. It mainly follows the procedures from the Mortality Prediction Project (MPP) described in Issue #95, implementing the MPP's CDC-related functions and splining techniques and amending existing CCM functions.

A successful demo can be seen in run_id = 4 in [CohortComponentModel].[outputs].[rates].

What issues does this pull request address?

closes #95

Additional context

refer to Mortality Prediction Project and Issue 95 for further detail

@KPHSANDAG KPHSANDAG requested a review from Copilot February 17, 2026 23:28
@KPHSANDAG KPHSANDAG self-assigned this Feb 17, 2026
@KPHSANDAG KPHSANDAG added the enhancement New feature or request label Feb 17, 2026
Contributor

Copilot AI left a comment


Pull request overview

This pull request updates the mortality calculation methodology for the Cohort Component Model (CCM) by implementing procedures from the Mortality Prediction Project (MPP). The changes primarily involve removing outdated CDC mortality data files from 2010-2020 and replacing the calculation approach with new CDC-related functions and splining techniques from the MPP.

Changes:

  • Removal of legacy CDC mortality data files for various demographic groups across multiple years (2010-2020)
  • Implementation of new mortality calculation procedures following the Mortality Prediction Project methodology
  • Integration of CDC-related functions and splining techniques into existing CCM functions


Copy link
Copy Markdown
Contributor

@GregorSchroeder GregorSchroeder left a comment


This is initial feedback. I think once these issues are resolved we can move into simplifying the whole process as it seems overly complex at the moment. With all these files and transformations, I think it would behoove us to move towards loading in mortality for the year of interest only as opposed to doing a mass loading and then subsequent filtering in get_death_rates.

data folder

  • We only need data back to 2010, remove all the pre-2010 datasets. Calling a folder "2018_2023" when it only contains 2022+ is confusing. Also, what is going to happen to that folder name when it includes 2024? I would suggest either not differentiating by product in separate folders, because they don't overlap in terms of uploaded data, or using more generic folder names.
  • Need to add a .gitattributes file so we aren't doing full track changes for all of these mortality rate files. See https://github.com/SANDAG/CA-DOF/blob/main/.gitattributes.

parsing file names

  • Why is there a mix of .csv and .txt files that necessitates the use of get_file_separator?
  • Do we need all these abbreviations and mappings in parse_filename, or could we just use full names and mappings? Why separate the labels and maps in valid_config? Couldn't it just use map and check for valid labels? Also, [str(y) for y in range(1999, 2024)] introduces a year restriction that should not be in place here.

parsing and transforming files

  • Replace usages of .query with .loc.
  • Is parse_not_stated actually using the input DataFrame it takes in? It looks like it just overwrites it immediately.
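For reference, the suggested .query-to-.loc change is mechanical; a minimal sketch with illustrative column names, not the PR's actual code:

```python
import pandas as pd

df = pd.DataFrame({"age": ["0", "1", "NS"], "deaths": [5, 3, 2]})

# Before: string expression, column names only checked at runtime parsing
before = df.query("age != 'NS'")

# After: .loc with an explicit boolean mask
after = df.loc[df["age"] != "NS"]

assert before.equals(after)
```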

@KPHSANDAG
Contributor Author

  • Why is there a mix of .csv and .txt files that necessitates the use of get_file_separator?

CDC underwent an update in which they no longer allow .txt downloads; they only have .xls, .tsv, or .csv. If we wanted to get rid of the file separator function, I need to redownload all of the .txt files and resave them as .csv. I am open to this if we think it would look better for all the files to be the same format. @GregorSchroeder what would you prefer?
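If the mixed formats are kept, the helper only needs to branch on the extension, with sniffing as a fallback. A sketch (the get_file_separator name comes from the PR; the body and delimiter assumptions here are illustrative, not the repository's implementation):

```python
import csv
from pathlib import Path

def get_file_separator(file_path: str) -> str:
    """Infer a delimiter from the file extension, sniffing as a fallback.

    Sketch only: assumes .txt/.tsv exports are tab-delimited and .csv
    comma-delimited, per the CDC WONDER discussion above.
    """
    suffix = Path(file_path).suffix.lower()
    if suffix in (".txt", ".tsv"):
        return "\t"
    if suffix == ".csv":
        return ","
    # Unknown extension: sniff the first line rather than guessing.
    with open(file_path, newline="") as f:
        return csv.Sniffer().sniff(f.readline()).delimiter
```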

Contributor

@GregorSchroeder GregorSchroeder left a comment


Why was the use of the logger removed and substituted with print statements? We want to write warnings, messages, etc. to the log file, not to print statements.

I think there is a lot of complication right now in the mortality code and most of it might be due to the file/folder structure. I think moving back to a simpler folder structure of just deaths with years as subfolders will 1) provide more organization and clarity as to what data sources are used to calculate mortality rates for a given year and 2) simplify the code allowing us to pass year as a parameter and fully isolate our calculations within a single year. Your filenames are doing all the heavy lifting in terms of identifying product and geographic grain so we don't need subfolders to provide further identification. I was thinking of pseudocode as follows under this new folder structure.

  • Create a function that takes in a single file path load_cdc_wonder
    • Check file name is consistent with pattern and return implied metadata validate_file_name
    • Load the file and check against the implied metadata
    • Load the Stated/Not Stated and apply inflation factor to deaths inflate_deaths
  • Wrapper function that takes a given year and calls the above function for each geography level parse_cdc_wonder
    • Apply substitution methodology rate_substitution
    • Apply smoothing smooth_rates
    • If a given year's product is the old CDC WONDER (can tell from file names or even just a hardcode year value) then take the "All" races data and duplicate it for "NHPI" and "Two or More"
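The duplication step in the last bullet could be sketched as follows (the product label, column names, and function name are placeholders for illustration, not the PR's actual code):

```python
import pandas as pd

def duplicate_all_races(df: pd.DataFrame, product: str) -> pd.DataFrame:
    """For the old CDC WONDER product, copy the "All" races rows to
    "NHPI" and "Two or More"; other products pass through unchanged.
    Sketch only: product label and column names are placeholders.
    """
    if product != "old_cdc_wonder":
        return df
    all_races = df.loc[df["race"] == "All"]
    copies = [all_races.assign(race=r) for r in ("NHPI", "Two or More")]
    return pd.concat([df, *copies], ignore_index=True)
```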

Contributor

@GregorSchroeder GregorSchroeder left a comment


Let's concentrate on naming, parsing, and validating the files then making sure we are using the metadata and a single year to drive everything.

"ASIAN": "Asian",
"BAA": "Black or African American",
"MOR": "More than one race",
"HIS": "Hispanic",
Contributor


There should be no Hispanic race.

"NHPI": "Native Hawaiian or Other Pacific Islander",
"WH": "White",
"ALL": "All",
"NA": "Not Available",
Contributor

@GregorSchroeder GregorSchroeder Mar 25, 2026


At your direction we are not exporting the NAs as they are all 0s and do not need to be incorporated into Not Stated inflation factor. Remove.
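Taken together with the earlier comment that there should be no Hispanic race, the mapping in the diff would reduce to something like this (sketch; keys copied from the diff, with HIS and NA dropped):

```python
# Race abbreviation map with "HIS" (an ethnicity, not a race) and "NA"
# (all-zero rows, per the review) removed; remaining keys are from the diff.
RACE_MAP = {
    "ASIAN": "Asian",
    "BAA": "Black or African American",
    "MOR": "More than one race",
    "NHPI": "Native Hawaiian or Other Pacific Islander",
    "WH": "White",
    "ALL": "All",
}
```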

f"Metadata hispanic: '{metadata['hispanic']}' does not match file contents."
)

# Race validation is done explicitly excepting for All and Hispanic
Contributor


There is no Hispanic race; remove this logic.

)
)

# Separate data by status (check if age_group is "SYA NS")
Contributor


What about the Hispanic Not Stated?

# Use unified load_cdc_wonder function
df = load_cdc_wonder(file_path)
if not df.empty:
location = df["location"].iloc[0]
Contributor


Just use the metadata; there is no need to set a new variable.

# Population for 2022+ county level is suppressed and will be replaced
# with yearly CCM population estimates which require annual death counts to
# calculate rates
deaths=lambda x: np.where(
Contributor


Instead of year and location, shouldn't we just use the metadata product? That would be consistent with later usage in this file.
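One way the suggestion could look, keeping everything keyed off the file metadata; the field names ("product", "population", "age"), product label, and fill logic here are assumptions for illustration only:

```python
import numpy as np
import pandas as pd

def replace_suppressed_population(df, metadata, pop_df):
    """Branch on the product recorded in the file metadata rather than
    re-deriving it from year and location. Field names and the product
    label are assumed, not the PR's actual schema.
    """
    if metadata.get("product") == "2018_2023" and pop_df is not None:
        # County-level 2022+ population is suppressed; substitute the
        # CCM population estimates wherever the loaded value is missing.
        df = df.assign(
            population=np.where(
                df["population"].isna(),
                df["age"].map(pop_df.set_index("age")["population"]),
                df["population"],
            )
        )
    return df
```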

and location == "San Diego County"
and pop_df is not None
):
pop_df_with_year = pop_df.copy()
Contributor


Just do it all in the df assignment. Also, why do the data types need to be set in pop_df?

stated_df = df.loc[
lambda x: (x["hispanic origin"] != "Not Stated")
& (x["age"] != "NS")
]
Contributor


Shouldn't this already be filtered in load_cdc_wonder?

pd.DataFrame: A single DataFrame for the specified year with data for
ages 0-99 from the CDC.
"""

Contributor


I would simply declare a list, append to it, and then use pd.concat later. We have the location field, so there is no need for a dictionary; and since we are using year-specific loading there is no need for product specification, although that is also a field.
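The list-then-concat pattern being suggested is the standard pandas idiom; a minimal illustration with made-up rows:

```python
import pandas as pd

# Accumulate per-file frames in a plain list and concatenate once at the
# end; the existing "location" column already identifies geography, so no
# dictionary keyed by geography level is needed. Rows here are invented.
frames = []
for location, deaths in [("San Diego County", 5), ("California", 50)]:
    frames.append(pd.DataFrame({"location": [location], "deaths": [deaths]}))

combined = pd.concat(frames, ignore_index=True)
```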

county = pd.concat(county_dfs, ignore_index=True)
state = pd.concat(state_dfs, ignore_index=True)
national = pd.concat(national_dfs, ignore_index=True)
substituted_df = merge_geographies(county, state, national)
Contributor


Instead of a function, couldn't we take the single combined DataFrame with the location field, put the rates in wide format by location, and then calculate our rate with a conditional on null? You had said previously that the substitution methodology is sufficient, so we can remove all the rate shifting and imputation logic.
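A sketch of the wide-format substitution being described, with illustrative column names and values (county falls back to state, then national, wherever null):

```python
import numpy as np
import pandas as pd

# Long frame of rates with a "location" field, as in the combined output.
rates = pd.DataFrame({
    "age":      [0, 0, 0, 1, 1, 1],
    "location": ["county", "state", "national"] * 2,
    "rate":     [np.nan, 0.002, 0.003, 0.001, 0.0015, 0.002],
})

# One column per geography; then take county where present, else state,
# else national. This stands in for a dedicated merge_geographies function.
wide = rates.pivot(index="age", columns="location", values="rate")
wide["rate"] = wide["county"].fillna(wide["state"]).fillna(wide["national"])
```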



Development

Successfully merging this pull request may close these issues.

[FEATURE] Update observed mortality rate calculation
