Semantic Embedding Augmentation of USDA’s Food Nutrient Imputation#
INTRODUCTION#
The USDA National Nutrient Database for Standard Reference provides comprehensive nutrient content information for approximately 7,800 foods and 150 nutrients. While this dataset could theoretically contain around 1.17 million food-nutrient pairings, only 31% (~360,000) are direct measurements. The remaining data is either missing (45%) or estimated through USDA imputation methods (24%).
This research aims to develop machine learning models to predict missing nutrient values in the USDA database. Rather than addressing all nutrients simultaneously, we will focus on a single nutrient (selected through exploratory data analysis) to demonstrate the potential of our approach. The models will leverage existing nutrient measurements and food name description embeddings as predictive features.
The primary objective is twofold: first, to accurately predict the selected nutrient’s content in foods based on other available data, and second, to compare our model’s predictions against USDA’s current imputation methods. We will train our models exclusively on measured data, holding out 20% of foods for testing, to enable direct comparison with USDA estimates and evaluate the model’s generalization capabilities.
This research could potentially improve the quality and reliability of the USDA dataset by providing data-driven alternatives to current imputation methods. Success in this endeavor would contribute to more accurate nutritional information for research, policy-making, and public health applications.
This project uses a traditional predictive modeling approach that centers on a single nutrient. My TA, Robin Liu, suggested matrix completion as an alternative. This method could predict all missing nutrient values at once, rather than one nutrient at a time. However, the single-nutrient focus aligns better with the course requirements and simplifies evaluation, making it easier to assess performance and interpret results. Future work could explore matrix completion to predict multiple nutrients simultaneously, incorporating factors like food types and production methods as additional inputs.
DEPENDENCY SETUP#
Database Connection#
We uploaded the USDA data into PostgreSQL for several reasons:
Data integrity and consistency: SQL databases ensure reliable data with constraints, relationships, and ACID transactions.
Efficient querying: Indexes and query optimization allow faster filtering and retrieval compared to flat files.
Schema enforcement: Prevents data corruption by enforcing structure and rules.
For the USDA database specifically, PostgreSQL offers additional advantages:
Remote access: Team members can collaborate from anywhere.
Complex queries: Easy handling of relationships and multi-table joins.
Better performance: Handles large datasets efficiently.
Data validation: Maintains referential integrity to prevent errors.
These benefits make PostgreSQL a better choice than flat files or isolated server storage.
import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.types import String
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from dotenv import load_dotenv
import os
RANDOM_SEED = 42
use_best_params=True
load_dotenv()
PGPASS = os.getenv('PGPASSWORD')
db_params = {
'host': 'aws-0-us-east-1.pooler.supabase.com',
'database': 'postgres',
'user': 'postgres.tcfushkpetfaqsorgwww',
'password': PGPASS,
'port': '6543'
}
engine = create_engine(f"postgresql://{db_params['user']}:{db_params['password']}@{db_params['host']}:{db_params['port']}/{db_params['database']}")
Selective Load File#
Some operations (database setup, etc.) are carried out using bash and SQL scripts.
The following function loads the important parts of these for display:
def selective_load_file(filename):
"""
Read and return the contents of a file, removing SQL and bash style comments
until encountering '**SUMMARY**'.
Args:
filename (str): Path to the file to be read
Returns:
str: Processed contents of the file
Raises:
FileNotFoundError: If the specified file doesn't exist
IOError: If there's an error reading the file
"""
try:
with open(filename, 'r') as file:
lines = file.readlines()
processed_lines = []
found_summary = False
for line in lines:
# Check for summary marker
if '**SUMMARY**' in line:
found_summary = True
# Before summary: remove comments
if not found_summary:
# Skip empty lines or lines that are only whitespace
if not line.strip():
continue
# Skip SQL style comments (--) and bash style comments (#)
if line.strip().startswith('--') or line.strip().startswith('#'):
continue
processed_lines.append(line)
return ''.join(processed_lines)
except FileNotFoundError:
print(f"Error: File '{filename}' not found")
except IOError as e:
print(f"Error reading file: {e}")
Dataframe Cache#
Some dataframe construction / database fetch operations can take a long time.
We wrap time-consuming operations using the following function:
import os
import pandas as pd
def read_df_from_cache_or_create(key: str, createdf = None) -> pd.DataFrame:
"""
Returns a DataFrame either from cache or creates it using the provided function.
The supplied function may load the dataframe from a remote database or it may create it in some other way.
If createdf is None, any existing cache for the given key is cleared.
Parameters:
key (str): Cache key to identify the DataFrame
createdf (callable, optional): Function that returns a pandas DataFrame.
If None, clears cache for the given key.
Returns:
pd.DataFrame: The cached or newly created DataFrame.
Returns None if createdf is None (cache clearing mode).
"""
# Ensure cache directory exists
os.makedirs('_cache', exist_ok=True)
# Construct cache file path
cache_path = os.path.join('_cache', f'{key}.df')
# If createdf is None, clear cache and return
if createdf is None:
if os.path.exists(cache_path):
try:
os.remove(cache_path)
print(f"Cache cleared for key: {key}")
except Exception as e:
print(f"Error clearing cache for key {key}: {e}")
return None
# Check if cache file exists
if os.path.exists(cache_path):
try:
# Load from cache
return pd.read_pickle(cache_path)
except Exception as e:
print(f"Error reading cache file: {e}")
# If there's an error reading cache, proceed to create new DataFrame
# Create new DataFrame
df = createdf()
# Validate that we got a DataFrame
if not isinstance(df, pd.DataFrame):
raise ValueError("createdf function must return a pandas DataFrame")
try:
# Save to cache
df.to_pickle(cache_path)
except Exception as e:
print(f"Warning: Could not save to cache: {e}")
return df
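A minimal usage sketch (the key "example_df" and the trivial query are placeholders, not part of the project): wrap any slow DataFrame construction in a lambda, and call the function again without createdf to drop the cached copy:
example_df = read_df_from_cache_or_create(
    "example_df",                                          # hypothetical cache key
    lambda: pd.read_sql_query("SELECT 1 AS one", engine)   # stands in for a slow fetch
)
read_df_from_cache_or_create("example_df")                 # createdf=None clears this cache entry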
Run Once Tracker#
Some operations (winsorization, etc) must not run more than once on the same data.
We wrap such run-once operations using the following helper:
class RunOnceTracker:
_executed_keys = set()
def run_once(key=None, target=None):
"""
Executes the target function only if the key hasn't been used before.
If key is None, clears all tracked keys.
If target is None, clears just the specified key.
Args:
key: Unique identifier for this execution (string or hashable type), or None to clear all
target: Function to execute, or None to clear the specified key
Returns:
Result from target() if executed, None if skipped or clearing
"""
# Case 1: Clear all keys
if key is None:
RunOnceTracker._executed_keys.clear()
print("Cleared all execution keys")
return None
# Case 2: Clear specific key
if target is None:
if key in RunOnceTracker._executed_keys:
RunOnceTracker._executed_keys.remove(key)
print(f"Cleared execution key: '{key}'")
else:
print(f"Key '{key}' was not found in execution tracking")
return None
# Case 3: Normal execution check
if key in RunOnceTracker._executed_keys:
print(f"run_once() key '{key}' has already been used. No action taken.")
return None
# Case 4: Execute target function and store key
result = target()
RunOnceTracker._executed_keys.add(key)
return result
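A minimal usage sketch (the key and the doubling step are illustrative only; a real use would wrap something non-idempotent like winsorization): the first call executes the target, the second is skipped:
data = pd.Series([1.0, 2.0, 3.0])
doubled = RunOnceTracker.run_once(key="double_example", target=lambda: data * 2)   # runs the target
RunOnceTracker.run_once(key="double_example", target=lambda: data * 2)             # skipped with a message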
PRIMARY DATASET: USDA’s Food Nutrition Database#
Citation and URL#
The US Department of Agriculture (USDA) maintains a comprehensive database of food nutrient information, documenting the nutritional content (proteins, fats, vitamins, minerals, etc.) of thousands of food items.
USDA National Nutrient Database (primary dataset)
The USDA National Nutrient Database for Standard Reference provides detailed nutrient content information on various foods, compiled by the U.S. Department of Agriculture’s Agricultural Research Service. The main data table links foods to specific nutrient quantities, i.e., “Food {F} contains {A} amount of nutrient {N}.”
Haytowitz, David B.; Ahuja, Jaspreet K.C.; Wu, Xianli; Somanchi, Meena; Nickle, Melissa; Nguyen, Quyen A.; Roseland, Janet M.; Williams, Juhi R.; Patterson, Kristine Y.; Li, Ying; Pehrsson, Pamela R. (2019). USDA National Nutrient Database for Standard Reference, Legacy Release. Nutrient Data Laboratory, Beltsville Human Nutrition Research Center, ARS, USDA. https://doi.org/10.15482/USDA.ADC/1529216
@misc{haytowitz2019usda,
title = {{USDA National Nutrient Database for Standard Reference, Legacy Release}},
author = {Haytowitz, David B. and Ahuja, Jaspreet K.C. and Wu, Xianli and Somanchi, Meena and Nickle, Melissa and Nguyen, Quyen A. and Roseland, Janet M. and Williams, Juhi R. and Patterson, Kristine Y. and Li, Ying and Pehrsson, Pamela R.},
year = {2019},
publisher = {{Nutrient Data Laboratory, Beltsville Human Nutrition Research Center, ARS, USDA}},
url = {https://catalog.data.gov/dataset/usda-national-nutrient-database-for-standard-reference-legacy-release-d1570},
doi = {10.15482/USDA.ADC/1529216}
}
This USDA database is a key resource for nutritional information, offering detailed data on approximately 150 nutrients across roughly 7,800 food items. Nutrients include vitamins, minerals, macronutrients, and other compounds essential for health. However, the database has a significant limitation: only 31% of possible food-nutrient pairings are directly measured, 24% are estimates, and 45% are missing entirely. This issue creates challenges in using the USDA database for accurate dietary analysis and research.
Database Setup#
We start by converting and importing the USDA National Nutrient Database (Legacy Release) from its native Access database format (.accdb) into a PostgreSQL database hosted on Supabase.
There are several steps:
download and unzip the source USDA database files,
use mdbtools to extract table schemas and data,
convert the Access database schema to PostgreSQL-compatible data types,
copy all table data using CSV as an intermediate format, and finally
run additional SQL scripts to create indexes, views, and tables for semantic embeddings.
Our bash script includes connection handling for Supabase and a ppsql wrapper function for database access:
## DOWNLOAD SR-Leg_DB.zip from https://agdatacommons.nal.usda.gov/articles/dataset/USDA_National_Nutrient_Database_for_Standard_Reference_Legacy_Release/24661818
## UNZIP to create SR_Legacy.accdb
function ppsql() {
(
. .env
# export PGPASSWORD=XXXXXXXXX
psql \
-h aws-0-us-east-1.pooler.supabase.com \
-p 6543 \
-d postgres \
-U postgres.tcfushkpetfaqsorgwww \
"$@"
)
} ; export -f ppsql
apt install mdbtools
mdb-tables SR_Legacy.accdb \
| tr " " "\n" \
| grep -v '^$' \
| while read t ; do echo "drop table if exists $t cascade;" ; done \
| ppsql
# s/Text(\(\d+\))?/VARCHAR(255)/g;
mdb-schema SR_Legacy.accdb \
| tr -d '[]' \
| perl -pe '
s/Text/VARCHAR/g;
s/Long Integer/INTEGER/g;
s/Integer/INTEGER/g;
s/Single/REAL/g;
s/Double/DOUBLE PRECISION/g;
s/DateTime/TIMESTAMP/g;
s/Currency/DECIMAL(19,4)/g;
s/Yes\/No/BOOLEAN/g;
s/Byte/SMALLINT/g;
s/Memo/TEXT/g;
' \
| tee schema.sql \
| ppsql
mdb-tables SR_Legacy.accdb \
| tr " " "\n" \
| grep -v '^$' \
| while read table ; do
echo $table
mdb-export SR_Legacy.accdb $table \
| ppsql -c "
COPY $table
-- (nutr_no, units, tagname, nutrdesc, num_dec, sr_order)
FROM STDIN
WITH (
FORMAT CSV,
HEADER TRUE,
DELIMITER ',',
QUOTE '\"'
);"
done
cat index_0011.sql | ppsql # link and index tables
cat iron_foods_0012.sql | ppsql # create iron foods view
cat documents_0010.sql | ppsql # documents table stores embeddings
cat einput_0010.sql | ppsql # einput view stores values that still need embeddings
The setup performed by the referenced *.sql files is explained in the sections that follow.
Data Selection: Iron Nutrient Foods and Peer Nutrients#
USDA’s database has a dozen tables with parent-child relationships, explained in the SR-Legacy_Doc.pdf that comes with the database.
We created a data view to focus our investigation on a manageable data subset:
Food Nutrient Rows#
The iron_foods view tracks nutrient content of foods using a narrow format, where:
Each row represents a unique food-nutrient pair (F,N)
food_id and food_name refer to a 100g edible portion of some food F
nutrient_id and nutrient_name refer to some nutrient N
For each (F,N) pair, USDA records:
The nutrient content value (nutrient_value)
The units (unit_of_measure) for nutrient_value
How the nutrient_value was determined (source_type)
If source_type = 1, the nutrient_value was measured by USDA.
If source_type in (4, 7, 8, 9), the nutrient_value is a calculated estimate (USDA imputation, etc.).
query = """
SELECT *
FROM iron_foods
"""
food_nutrient_rows = read_df_from_cache_or_create(
"food_nutrient_rows",
lambda: pd.read_sql_query(query, engine)
)
# print(f"food_nutrient_rows stats:")
unique_food_count = food_nutrient_rows['food_id'].nunique()
print(f"\tfood_id unique count: {unique_food_count}")
print(f"\tfood_name unique count: {food_nutrient_rows['food_name'].nunique()}")
print(f"\tfood_group unique count: {food_nutrient_rows['food_group'].nunique()}")
print(f"\tnutrient_id unique count: {food_nutrient_rows['nutrient_id'].nunique()}")
print(f"\tnutrient_name unique count: {food_nutrient_rows['nutrient_name'].nunique()}")
# print(f"\n")
display(food_nutrient_rows.info())
food_nutrient_rows
food_id unique count: 7713
food_name unique count: 7713
food_group unique count: 25
nutrient_id unique count: 20
nutrient_name unique count: 17
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 131300 entries, 0 to 131299
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 food_id 131300 non-null object
1 food_name 131300 non-null object
2 nutrient_id 131300 non-null object
3 nutrient_name 131300 non-null object
4 unit_of_measure 131300 non-null object
5 nutrient_value 131300 non-null float64
6 standard_error 42762 non-null float64
7 number_of_samples 131300 non-null int64
8 min_value 27931 non-null float64
9 max_value 27931 non-null float64
10 food_group 131300 non-null object
11 source_type 131300 non-null object
dtypes: float64(4), int64(1), object(7)
memory usage: 12.0+ MB
None
food_id | food_name | nutrient_id | nutrient_name | unit_of_measure | nutrient_value | standard_error | number_of_samples | min_value | max_value | food_group | source_type | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 01092 | Milk, dry, nonfat, instant, with added vitamin... | 617 | Oleic fatty acid | g | 0.156 | 0.004 | 5 | NaN | NaN | Dairy and Egg Products | 1 |
1 | 01092 | Milk, dry, nonfat, instant, with added vitamin... | 618 | Linoleic fatty acid | g | 0.018 | 0.004 | 5 | NaN | NaN | Dairy and Egg Products | 1 |
2 | 01092 | Milk, dry, nonfat, instant, with added vitamin... | 626 | Palmitoleic fatty acid | g | 0.021 | 0.002 | 5 | NaN | NaN | Dairy and Egg Products | 1 |
3 | 01093 | Milk, dry, nonfat, calcium reduced | 203 | Protein | g | 35.500 | NaN | 1 | NaN | NaN | Dairy and Egg Products | 1 |
4 | 01093 | Milk, dry, nonfat, calcium reduced | 207 | Ash | g | 7.600 | NaN | 1 | NaN | NaN | Dairy and Egg Products | 1 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
131295 | 01092 | Milk, dry, nonfat, instant, with added vitamin... | 309 | Zinc, Zn | mg | 4.410 | 0.124 | 10 | NaN | NaN | Dairy and Egg Products | 1 |
131296 | 01092 | Milk, dry, nonfat, instant, with added vitamin... | 312 | Copper, Cu | mg | 0.041 | NaN | 0 | NaN | NaN | Dairy and Egg Products | 1 |
131297 | 01092 | Milk, dry, nonfat, instant, with added vitamin... | 404 | Thiamin | mg | 0.413 | 0.006 | 31 | NaN | NaN | Dairy and Egg Products | 1 |
131298 | 01092 | Milk, dry, nonfat, instant, with added vitamin... | 405 | Riboflavin | mg | 1.744 | 0.040 | 31 | NaN | NaN | Dairy and Egg Products | 1 |
131299 | 01092 | Milk, dry, nonfat, instant, with added vitamin... | 406 | Niacin | mg | 0.891 | 0.042 | 29 | NaN | NaN | Dairy and Egg Products | 1 |
131300 rows × 12 columns
Nutrient Units of Measure#
For any given nutrient and food in the USDA database, nutrient content is reported per 100 grams of the edible portion of that food.
The units of nutrient content measurement vary by nutrient type:
Macronutrients (like protein, fat, carbohydrates): measured in grams (g)
Minerals and some vitamins: measured in milligrams (mg)
Trace elements and some vitamins: measured in micrograms (µg)
Certain vitamins (A, D, E): measured in International Units (IU)
For example, in 100g of raw apple (just edible portion with core and pits removed):
Carbohydrates would be reported in grams
Potassium would be reported in milligrams
Vitamin B12 would be reported in micrograms
Vitamin D would be reported in International Units
The table below shows the measurement units used for nutrients in our study.
food_nutrient_rows.groupby('nutrient_name')['unit_of_measure'].unique()
nutrient_name
Ash [g]
Calcium, Ca [mg]
Copper, Cu [mg]
Iron, Fe [mg]
Linoleic fatty acid [g]
Magnesium, Mg [mg]
Niacin [mg]
Oleic fatty acid [g]
Palmitoleic fatty acid [g]
Phosphorus, P [mg]
Potassium, K [mg]
Protein [g]
Riboflavin [mg]
Sodium, Na [mg]
Thiamin [mg]
Water [g]
Zinc, Zn [mg]
Name: unit_of_measure, dtype: object
Pivot to Food Rows#
For convenience we pivot the narrow food_nutrient_rows dataframe into a wide-format food_rows dataframe so that:
Each row represents one unique food
Columns contain information about that food (name, food group, nutrient content, etc.)
def pivot_to_food_rows5(narrow_df):
# selection_mask = (narrow_df['nutrient_name'] == 'Iron, Fe') & (narrow_df['source_type'] == '1')
selection_mask = (narrow_df['nutrient_name'] == 'Iron, Fe')
iron_food_id = narrow_df[selection_mask]['food_id'].unique()
source_type_df = narrow_df[
selection_mask
][['food_id', 'source_type']]
# print("source_type_df")
# with pd.option_context('display.max_rows', 100):
# print(source_type_df.tail(1000))
pivot_df = narrow_df[
narrow_df['food_id'].isin(iron_food_id)
][['food_id', 'food_name', 'food_group', 'nutrient_name', 'nutrient_value']]
wide_df = pivot_df.pivot_table(
index=['food_id', 'food_name', 'food_group'],
columns='nutrient_name',
values='nutrient_value',
aggfunc='first'
).reset_index()
# Join source_type_df with wide_df on food_id column
wide_df = wide_df.merge(source_type_df, on='food_id', how='inner')
wide_df.columns.name = None
# first_cols = ['food_id', 'food_name', 'food_group', "iron_source_type"] # Update to use new column name
first_cols = ['food_id', 'food_name', 'food_group', "source_type"]
other_cols = sorted([col for col in wide_df.columns if col not in first_cols])
wide_df = wide_df[first_cols + other_cols]
return wide_df
food_rows = pivot_to_food_rows5(food_nutrient_rows)
# print(food_rows.shape)
# print(food_rows.info())
food_rows
food_id | food_name | food_group | source_type | Ash | Calcium, Ca | Copper, Cu | Iron, Fe | Linoleic fatty acid | Magnesium, Mg | ... | Oleic fatty acid | Palmitoleic fatty acid | Phosphorus, P | Potassium, K | Protein | Riboflavin | Sodium, Na | Thiamin | Water | Zinc, Zn | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 01001 | Butter, salted | Dairy and Egg Products | 1 | 2.11 | 24.0 | 0.000 | 0.02 | 2.728 | 2.0 | ... | 19.961 | 0.961 | 24.0 | 24.0 | 0.85 | 0.034 | 643.0 | 0.005 | 16.17 | 0.09 |
1 | 01002 | Butter, whipped, with salt | Dairy and Egg Products | 1 | 1.62 | 23.0 | 0.010 | 0.05 | 2.713 | 1.0 | ... | 17.370 | 1.417 | 24.0 | 41.0 | 0.49 | 0.064 | 583.0 | 0.007 | 16.72 | 0.05 |
2 | 01003 | Butter oil, anhydrous | Dairy and Egg Products | 4 | 0.00 | 4.0 | 0.001 | 0.00 | 2.247 | 0.0 | ... | 25.026 | 2.228 | 3.0 | 5.0 | 0.28 | 0.005 | 2.0 | 0.001 | 0.24 | 0.01 |
3 | 01004 | Cheese, blue | Dairy and Egg Products | 1 | 5.11 | 528.0 | 0.040 | 0.31 | 0.536 | 23.0 | ... | 6.622 | 0.816 | 387.0 | 256.0 | 21.40 | 0.382 | 1146.0 | 0.029 | 42.41 | 2.66 |
4 | 01005 | Cheese, brick | Dairy and Egg Products | 1 | 3.18 | 674.0 | 0.024 | 0.43 | 0.491 | 24.0 | ... | 7.401 | 0.817 | 451.0 | 136.0 | 23.24 | 0.351 | 560.0 | 0.014 | 41.11 | 2.60 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
7708 | 83110 | Fish, mackerel, salted | Finfish and Shellfish Products | 1 | 13.40 | 66.0 | 0.100 | 1.40 | 0.369 | 60.0 | ... | 4.224 | 1.495 | 254.0 | 520.0 | 18.50 | 0.190 | 4450.0 | 0.020 | 43.00 | 1.10 |
7709 | 90240 | Mollusks, scallop, (bay and sea), cooked, steamed | Finfish and Shellfish Products | 4 | 2.97 | 10.0 | 0.033 | 0.58 | 0.014 | 37.0 | ... | 0.053 | 0.015 | 426.0 | 314.0 | 20.54 | 0.024 | 667.0 | 0.012 | 70.25 | 1.55 |
7710 | 90480 | Syrup, Cane | Sweets | 1 | 0.86 | 13.0 | 0.020 | 3.60 | 0.000 | 10.0 | ... | 0.000 | 0.000 | 8.0 | 63.0 | 0.00 | 0.060 | 58.0 | 0.130 | 26.00 | 0.19 |
7711 | 90560 | Mollusks, snail, raw | Finfish and Shellfish Products | 1 | 1.30 | 10.0 | 0.400 | 3.50 | 0.017 | 250.0 | ... | 0.211 | 0.048 | 272.0 | 382.0 | 16.10 | 0.120 | 70.0 | 0.010 | 79.20 | 1.00 |
7712 | 93600 | Turtle, green, raw | Finfish and Shellfish Products | 1 | 1.20 | 118.0 | 0.250 | 1.40 | 0.033 | 20.0 | ... | 0.073 | 0.015 | 180.0 | 230.0 | 19.80 | 0.150 | 68.0 | 0.120 | 78.50 | 1.00 |
7713 rows × 21 columns
Data Source Food Rows#
To show what nutrition information the USDA has, we also load a table with a row for every food F and a column for every nutrient N, containing -1 if (F,N) has no data, 0 if the USDA has calculated or imputed a value, and 1 if (F,N) has been measured.
We created this table in SQL as shown below and will use it later to visualize the contents of the USDA dataset.
WITH nutrient_names AS (
SELECT DISTINCT nutrient_name
FROM food_nutrient_rows
ORDER BY nutrient_name
),
pivot_columns AS (
SELECT string_agg(
format('COALESCE(MAX(CASE
WHEN nutrient_name = %L THEN
CASE
WHEN data_source = ''measured'' THEN 1
WHEN data_source = ''assumed'' THEN 0
END
END), -1) AS "n:%s"',
nutrient_name,
nutrient_name),
', '
) AS columns
FROM nutrient_names
)
SELECT
'DROP VIEW IF EXISTS data_source_food_rows cascade; ' ||
'CREATE VIEW data_source_food_rows AS ' ||
'SELECT food_id, food_name, food_group, ' ||
(SELECT columns FROM pivot_columns) ||
' FROM food_nutrient_rows GROUP BY food_id, food_name, food_group ORDER BY food_id;';
data_source_food_rows = read_df_from_cache_or_create(
"data_source_food_rows",
lambda: pd.read_sql_query("SELECT * FROM data_source_food_rows", engine)
)
data_source_food_rows
food_id | food_name | food_group | n:Adrenic fatty acid | n:Alanine | n:Alcohol, ethyl | n:Alpha-linolenic fatty acid (ALA) | n:Arachidic fatty acid | n:Arachidonic fatty acid | n:Arachidonic fatty acid (ARA) | ... | n:Vitamin C, total ascorbic acid | n:Vitamin D | n:Vitamin D2 (ergocalciferol) | n:Vitamin D3 (cholecalciferol) | n:Vitamin D (D2 + D3) | n:Vitamin E, added | n:Vitamin E (alpha-tocopherol) | n:Vitamin K (phylloquinone) | n:Water | n:Zinc, Zn | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 01001 | Butter, salted | Dairy and Egg Products | -1 | 1 | 0 | 1 | 1 | 1 | -1 | ... | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
1 | 01002 | Butter, whipped, with salt | Dairy and Egg Products | 1 | 1 | 0 | 1 | 1 | 1 | -1 | ... | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 |
2 | 01003 | Butter oil, anhydrous | Dairy and Egg Products | -1 | 0 | 0 | -1 | -1 | 1 | -1 | ... | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
3 | 01004 | Cheese, blue | Dairy and Egg Products | -1 | 1 | 0 | -1 | -1 | 1 | -1 | ... | 1 | 0 | -1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 |
4 | 01005 | Cheese, brick | Dairy and Egg Products | -1 | 1 | 0 | -1 | -1 | 1 | -1 | ... | 1 | 0 | -1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
7788 | 83110 | Fish, mackerel, salted | Finfish and Shellfish Products | -1 | -1 | 0 | -1 | -1 | 0 | -1 | ... | 0 | 0 | -1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
7789 | 90240 | Mollusks, scallop, (bay and sea), cooked, steamed | Finfish and Shellfish Products | 0 | 0 | 0 | -1 | 0 | 0 | -1 | ... | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
7790 | 90480 | Syrup, Cane | Sweets | -1 | -1 | 0 | -1 | -1 | 0 | -1 | ... | 1 | 0 | -1 | -1 | 0 | 0 | 0 | 0 | 1 | 1 |
7791 | 90560 | Mollusks, snail, raw | Finfish and Shellfish Products | -1 | -1 | 0 | -1 | -1 | 0 | -1 | ... | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7792 | 93600 | Turtle, green, raw | Finfish and Shellfish Products | -1 | -1 | 0 | -1 | -1 | 0 | -1 | ... | 0 | 0 | -1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
7793 rows × 147 columns
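The -1/0/1 coding also makes it easy to check how the measured, imputed, and missing shares in this subset compare with the roughly 31%/24%/45% split quoted for the full database. A quick sketch (it assumes the n:-prefixed column layout shown above):
# Share of (food, nutrient) cells that are missing (-1), imputed/calculated (0), or measured (1)
nutrient_cols = [c for c in data_source_food_rows.columns if c.startswith('n:')]
cell_counts = data_source_food_rows[nutrient_cols].stack().value_counts()
print((100 * cell_counts / cell_counts.sum()).round(1))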
SECONDARY DATASET: OpenAI’s Embedding#
For every food_name, food_group, and nutrient_name we have OpenAI compute an embedding vector, which we save to a database table called documents.
Datasource Citation#
OpenAI’s Semantic Embedding (from “text-embedding-3-small” model) is a secondary dataset we will use to augment USDA’s food nutrient dataset:
@misc{openai_text_embedding_3_small,
author = {OpenAI},
title = {OpenAI's Embedding Model text-embedding-3-small},
year = {2024},
note = {Semantic numerical representation of food\_name and food\_group},
howpublished = {\url{https://platform.openai.com/docs/guides/embeddings}},
}
This augmentation provides a high-dimensional spatial representation of textual attributes like food_name, food_group, and nutrient_name.
Database Setup#
We prepare our database as follows:
Database Setup:
Enable the vector extension for PostgreSQL to handle embeddings
Create a documents table that stores text content and its 1536-dimensional vector representation (embedding)
Performance Optimization:
Create an IVF-Flat index for fast similarity searches
Use cosine similarity as the distance metric
Set 400 clusters (about 4 * sqrt(expected rows)) for the index organization
IVF-Flat divides vectors into clusters to speed up searches
Search Function:
Create a match_documents function that:
Takes a query vector, a similarity threshold, and how many matches to return
Uses the cosine distance operator (<=>) to find similar documents
Returns documents ordered by similarity, filtering out low-similarity matches
Returns the document ID, content, and calculated similarity score
This one time database setup allows SQL queries to efficiently find food terms that are conceptually similar, rather than just exact text matches.
create extension vector;
drop table if exists documents cascade;
create table documents (
id bigserial primary key,
content text,
embedding vector(1536)
);
create index on documents using ivfflat (embedding vector_cosine_ops)
with
(lists = 400); -- sqrt(7777)*4
-- Each distance operator requires a different type of index.
-- We expect to order by cosine distance, so we need vector_cosine_ops index.
-- A good starting number of lists is 4 * sqrt(table_rows):
create or replace function match_documents (
query_embedding vector(1536),
match_threshold float,
match_count int
)
returns table (
id bigint,
content text,
similarity float
)
language sql stable
as $$
select
documents.id,
documents.content,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where documents.embedding <=> query_embedding < 1 - match_threshold
order by documents.embedding <=> query_embedding
limit match_count;
$$;
Embedding Inputs#
Computation of embeddings is a one-time process that reads strings that still need embeddings from a database view called einput:
CREATE VIEW einput AS WITH dummy AS
(SELECT 1),
fnames AS
(SELECT food_name AS einput
FROM iron_foods),
nnames AS
(SELECT nutrient_name AS einput
FROM iron_foods),
gnames AS
(SELECT food_group AS einput
FROM iron_foods),
tunion AS
(SELECT einput
FROM fnames
UNION SELECT einput
FROM nnames
UNION SELECT einput
FROM gnames)
SELECT einput
FROM tunion
WHERE TRUE
AND einput NOT IN
(SELECT content
FROM documents);
The einput view creates a list of text entries to be added to our documents table:
It pulls three types of text strings from the iron_foods view we looked at earlier:
Food names (e.g., “Spinach, raw”)
Nutrient names (e.g., “Iron, Fe”)
Food group names (e.g., “Vegetables and Vegetable Products”)
It combines all these strings into one list using UNION
Finally, it filters out any strings that already exist in the documents table
The end result is a list of food names, nutrient names, and food group names from the iron_foods view that haven’t yet been added to the documents table.
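Because einput filters against documents, querying it is also a convenient way to check how many strings still lack embeddings; once the ETL below has run, this count should drop to zero. A small sketch using the same engine as before:
pending = pd.read_sql_query("SELECT COUNT(*) AS pending FROM einput", engine)
print(pending)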
Embedding API ETL#
Setup:
Establish connections to Supabase (a PostgreSQL database service) and OpenAI’s API
Extract (get_documents):
Retrieve food-related terms from the einput view we looked at earlier
Use pagination to handle large datasets (100 records at a time)
Include retry logic to handle potential API failures
Transform (get_embedding):
Take each food-related term
Send it to OpenAI’s text-embedding-3-small model
Get back a numerical vector representation (embedding) of the term
Load:
Store each term and its embedding in the documents table
This is why the previous einput view filtered against the documents table - to avoid duplicate processing
Once this process completes our database is loaded with embedding vectors:
import os
import time  # used by the retry back-off (time.sleep) in get_documents below
from supabase import create_client, Client
supabase_url = "https://tcfushkpetfaqsorgwww.supabase.co"
supabase_key = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
# Initialize Supabase client
supabase: Client = create_client(supabase_url, supabase_key)
import openai
from google.colab import userdata
api_key = userdata.get('OPENAI_API_KEY')
client = openai.OpenAI(api_key=api_key)
def get_embedding(text, model="text-embedding-3-small"):
text = text.replace("\n", " ")
return client.embeddings.create(input = [text], model=model).data[0].embedding
def get_documents():
all_documents = []
page_size = 100 # Reduced from 1000 to 100 to process smaller chunks
start = 0
max_retries = 3
while True:
retry_count = 0
while retry_count < max_retries:
try:
response = supabase.table('einput')\
.select('einput')\
.range(start, start + page_size - 1)\
.execute()
documents = [record['einput'] for record in response.data]
all_documents.extend(documents)
# If we got less than a full page, we're done
if len(documents) < page_size:
return all_documents
# Move to next page
start += page_size
break # Break retry loop on success
except Exception as e:
retry_count += 1
if retry_count == max_retries:
print(f"Failed after {max_retries} attempts. Error: {str(e)}")
# Return what we have so far
return all_documents
print(f"Retry {retry_count} after error: {str(e)}")
time.sleep(2) # Wait 2 seconds before retrying
return all_documents
def generate_embeddings():
documents = get_documents() # Load documents to process
for document in documents:
embedding = get_embedding(document)
data = {
"content": document,
"embedding": embedding
}
supabase.table("documents").insert(data).execute()
generate_embeddings()
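Once the documents table is populated, the match_documents function created earlier can be exercised from Python. The sketch below is illustrative only: it reuses the supabase client and get_embedding helper defined above, assumes the Supabase RPC interface accepts the embedding as a plain list, and uses an arbitrary query string and thresholds:
query_vec = get_embedding("Spinach, raw")                    # embed an example query string
matches = supabase.rpc(
    "match_documents",
    {"query_embedding": query_vec, "match_threshold": 0.5, "match_count": 5},
).execute()
for row in matches.data:                                     # rows carry id, content, similarity
    print(round(row["similarity"], 3), row["content"])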
Load Embeddings#
We retrieve embeddings from our database in smaller, manageable batches (a technique called pagination), which helps prevent timeouts and avoids overwhelming the server with too much data at once.
def get_embeddings(engine, batch_size=1000):
"""
Retrieve all embeddings using pagination to avoid timeout.
"""
count_query = "SELECT COUNT(*) FROM documents"
total_records = pd.read_sql_query(count_query, engine).iloc[0, 0]
dfs = []
# Paginate through the results
for offset in range(0, total_records, batch_size):
query = f"""
SELECT content, embedding::text
FROM documents
ORDER BY id -- Ensure consistent ordering
LIMIT {batch_size}
OFFSET {offset};
"""
try:
batch_df = pd.read_sql_query(query, engine)
dfs.append(batch_df)
# Optional: Print progress
print(f"Fetched {min(offset + batch_size, total_records)} of {total_records} records")
except Exception as e:
print(f"Error fetching batch at offset {offset}: {str(e)}")
continue
# Combine all batches into a single DataFrame
if not dfs:
raise Exception("No data was retrieved from the database")
final_df = pd.concat(dfs, ignore_index=True)
return final_df
# doc_embed = get_embeddings(engine, batch_size=1000)
doc_embed = read_df_from_cache_or_create(
"doc_embed",
lambda: get_embeddings(engine, batch_size=1000)
)
# engine.dispose()
# doc_embed.shape
doc_embed
content | embedding | |
---|---|---|
0 | 14:0 | [0.009029812,-0.063059315,-0.019838428,0.01334... |
1 | 16:0 | [0.010453682,-0.03951149,-0.00312925,0.0124210... |
2 | 16:1 undifferentiated | [0.012321927,0.009563978,0.00871419,0.02277431... |
3 | 18:0 | [-0.022437058,-0.044308595,-0.031584367,0.0210... |
4 | 18:1 undifferentiated | [-0.015148849,0.01356421,-0.012176703,0.027537... |
... | ... | ... |
7964 | Glycine | [-0.03623759,-0.004581659,-0.037606895,0.02060... |
7965 | Eicosadienoic fatty acid | [0.006372733,0.0032185817,-0.017489873,0.02574... |
7966 | Alcoholic Beverage, wine, table, red, Gamay | [-0.046902906,-0.014858377,-0.01236847,-0.0092... |
7967 | Lignoceric fatty acid | [-0.032019954,0.024238927,-0.06936377,-0.01990... |
7968 | Hydroxyproline | [-0.017597416,-0.012690569,0.012697034,0.04271... |
7969 rows × 2 columns
Merge Embeddings#
We merge embedding vector representations to our USDA nutritional data as follows:
def split_emb(df):
df = df.copy()
df['embedding'] = df['embedding'].apply(lambda x:
np.array(x.strip('[]').split(','), dtype=np.float32)
)
return df
def merge_foods_embeddings(df, raw_embeddings):
merged_df = df.merge(
raw_embeddings,
left_on='food_name', # For the present study only food name embeddings are used.
right_on='content',
how='inner'
)
# Drop the duplicate content column since it's the same as food_name
merged_df = merged_df.drop(columns=['content'])
return merged_df
doc_embed2=split_emb(df=doc_embed)
print(f"Data shape BEFORE embedding: {food_rows.shape}")
food_rows = merge_foods_embeddings(food_rows, doc_embed2)
print(f"Data shape AFTER embedding: {food_rows.shape}")
# food_rows.info()
# food_rows
Data shape BEFORE embedding: (7713, 21)
Data shape AFTER embedding: (7713, 22)
Each embedding vector is stored as a single object, so merging the embeddings adds just one extra feature column to the dataset.
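Because the whole 1536-dimensional vector lives in a single object column, scikit-learn style models will typically need it expanded into individual numeric columns before training. A minimal sketch of that expansion (food_rows_wide and the emb_ prefix are just illustrative names):
emb_matrix = np.vstack(food_rows['embedding'].values)                   # shape: (n_foods, 1536)
emb_cols = [f'emb_{i}' for i in range(emb_matrix.shape[1])]
emb_df = pd.DataFrame(emb_matrix, columns=emb_cols, index=food_rows.index)
food_rows_wide = pd.concat([food_rows.drop(columns=['embedding']), emb_df], axis=1)
print(food_rows_wide.shape)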
EXPLORATORY DATA ANALYSIS#
Data Codebook#
The entire USDA Food Nutrition database is too complex to cover here; the codebook below describes only the fields used in this study.
food_id: 5-digit NDB number that uniquely identifies a food item (leading zero is lost when this field is defined as numeric)
food_name: 200-character description of food item
food_group: Name of food group, categorized as follows:
- Vegetables and Vegetable Products
- Beef Products
- Lamb, Veal, and Game Products
- Baked Products
- Poultry Products
- Fruits and Fruit Juices
- Pork Products
- Fast Foods
- Finfish and Shellfish Products
- Dairy and Egg Products
- Soups, Sauces, and Gravies
- Legumes and Legume Products
- Beverages
- Cereal Grains and Pasta
- Baby Foods
- Nut and Seed Products
- Sausages and Luncheon Meats
- Sweets
- Snacks
- Restaurant Foods
- American Indian/Alaska Native Foods
- Fats and Oils
- Meals, Entrees, and Side Dishes
- Spices and Herbs
- Breakfast Cereals
Nutrient Name and units of measure:
- Calcium, Ca:['mg']
- Carbohydrate, by difference:['g']
- Cholesterol:['mg']
- Copper, Cu:['mg']
- Fatty acids, total monounsaturated:['g']
- Fatty acids, total polyunsaturated:['g']
- Fatty acids, total saturated:['g']
- Fiber, total dietary:['g']
- Iron, Fe:['mg']
- Linoleic fatty acid:['g']
- Magnesium, Mg:['mg']
- Niacin:['mg']
- Oleic fatty acid:['g']
- Phosphorus, P:['mg']
- Potassium, K:['mg']
- Protein:['g']
- Riboflavin:['mg']
- Sodium, Na:['mg']
- Thiamin:['mg']
- Total lipid (fat):['g']
- Vitamin A, IU:['IU']
- Vitamin B-12:['µg']
- Vitamin B-6:['mg']
- Vitamin C, total ascorbic acid:['mg']
- Zinc, Zn:['mg']
Histograms of Nutrient Content#
def plot_iron_distribution_log(df):
# plt.figure(figsize=(12, 6))
plt.figure(figsize=(15, 5))
iron_values = df['Iron, Fe'].dropna()
# Calculate log-spaced bins, adding epsilon to handle zeros
eps = 1e-3
min_value = max(iron_values.min(), eps) # Ensure minimum is positive
max_value = iron_values.max()
bins = np.logspace(np.log10(min_value), np.log10(max_value), 23)
sns.histplot(data=iron_values,
bins=bins,
color='darkred',
alpha=0.6,
kde=False)
plt.title('Histogram of Measured Iron Content for USDA Tracked Foods', size=14, pad=15)
plt.xlabel('Iron Content (mg/100g)', size=12)
plt.ylabel('Frequency', size=12)
plt.xscale('log')
plt.yscale('log')
stats_text = f'Mean: {iron_values.mean():.2f} mg\n'
stats_text += f'Median: {iron_values.median():.2f} mg\n'
stats_text += f'Std Dev: {iron_values.std():.2f} mg'
# Position text box in upper right
plt.text(0.95, 0.95, stats_text,
transform=plt.gca().transAxes,
verticalalignment='top',
horizontalalignment='right',
bbox=dict(boxstyle='round', facecolor='white', alpha=0.8))
plt.tight_layout()
plt.show()
plot_iron_distribution_log(food_rows)
[Figure: histogram of measured iron content for USDA tracked foods (log-log axes)]
This histogram shows the distribution of iron content across USDA tracked foods, with values displayed on a logarithmic scale.
A few key observations:
The distribution is right-skewed, with most foods containing relatively low iron content (median 1.27 mg/100g)
There’s significant spread in the data (standard deviation 5.42 mg)
The mean (2.47 mg) being higher than the median indicates some foods with very high iron content pulling the average up
Most foods appear to contain between 0.1 and 10 mg of iron per 100g
There are relatively few foods with very high iron content (>20 mg/100g)
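These observations can be checked directly against the data; a quick sketch (the thresholds simply mirror the ranges quoted above):
iron = food_rows['Iron, Fe'].dropna()
print(f"skewness: {iron.skew():.2f}")                        # positive value confirms right skew
print(f"foods between 0.1 and 10 mg/100g: {((iron >= 0.1) & (iron <= 10)).mean() * 100:.1f}%")
print(f"foods above 20 mg/100g: {(iron > 20).sum()}")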
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from itertools import product
def plot_nutrient_matrix(df, nutrient_limit=3, food_group_limit=3, figsize=(20, 20)):
# Sort by frequency
nutrient_counts = df['nutrient_name'].value_counts()
food_group_counts = df['food_group'].value_counts()
nutrients = nutrient_counts.index.tolist()
food_groups = food_group_counts.index.tolist()
# Apply limits if specified
if nutrient_limit:
nutrients = nutrients[:nutrient_limit]
if food_group_limit:
food_groups = food_groups[:food_group_limit]
fig, axes = plt.subplots(len(food_groups), len(nutrients),
figsize=figsize,
squeeze=False)
# plt.suptitle('Nutrient Distributions by Food Group',
# size=16, y=0.95)
plt.subplots_adjust(hspace=0.4, wspace=0.4)
for (i, fg), (j, nut) in product(
enumerate(food_groups),
enumerate(nutrients)
):
ax = axes[i, j]
mask = (df['food_group'] == fg) & (df['nutrient_name'] == nut)
values = df[mask]['nutrient_value'].dropna()
unit = df[df['nutrient_name'] == nut]['unit_of_measure'].iloc[0]
if len(values) > 0:
# eps = 1e-3
eps = 0.005
min_value = max(values.min(), eps)
max_value = values.max()
bins = np.logspace(np.log10(min_value), np.log10(max_value), 37)
sns.histplot(data=values,
bins=bins,
color='darkred',
alpha=0.7,
kde=False,
ax=ax)
ax.set_xscale('log')
ax.set_yscale('log')
# ax.set_xlim(1e-3, 1e4)
ax.set_xlim(0.004, 5000)
ax.set_ylim(eps, 1000)
if i != len(food_groups)-1:
ax.set_xlabel('')
if j != 0:
ax.set_ylabel('')
# stats_text = f'n={len(values)}\nmed={values.median():.1f}'
# ax.text(0.95, 0.95, stats_text,
# transform=ax.transAxes,
# verticalalignment='top',
# horizontalalignment='right',
# fontsize=8,
# bbox=dict(facecolor='white',
# alpha=0.8,
# edgecolor='none'))
ax.grid(True, alpha=0.3)
ax.tick_params(labelsize=8)
for ax, col in zip(axes[0], nutrients):
ax.set_title(col, rotation=45, ha='left', size=10)
for ax, row in zip(axes[:,0], food_groups):
ax.set_ylabel(row, rotation=45, ha='right', size=10)
plt.tight_layout()
return fig, axes
# # Usage:
fig, axes = plot_nutrient_matrix(food_nutrient_rows, nutrient_limit=12, food_group_limit=16)
plt.show()
[Figure: grid of nutrient content histograms, nutrients as columns and food groups as rows]
The single histogram above can be placed into a grid of histograms with:
Nutrient types as column headers (shown rotated 45 degrees)
Food group categories as row labels (shown rotated 45 degrees)
Each individual plot contains:
A histogram with darkred semi-transparent bars
Consistent axis scaling and limits across all plots.
Logarithmic scales on both x and y axes
histogram y axes span about three orders of magnitude (ax.set_ylim(eps, 1000))
histogram x axes span about six orders of magnitude (ax.set_xlim(0.004, 5000))
histogram x axis units do not change within a column because each column shows just one nutrient.
histogram x axis units may change across columns because different columns show different nutrients. See the “Nutrient Units of Measure” section (above) for how each nutrient is measured.
The grid allows quick visual comparison across different nutrient-food group combinations, with each histogram showing the distribution of values for that specific pairing. Histogram log scales accommodate the wide range of values present in the data. Because each histogram has the same construction (same fixed scaling and bins), the amount of red in each panel visually shows how much data is available at the intersection of each nutrient and food_group.
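The same availability information can be summarized numerically as a count of rows for each nutrient and food group pairing; a quick sketch over the narrow dataframe:
counts = food_nutrient_rows.pivot_table(
    index='food_group',
    columns='nutrient_name',
    values='nutrient_value',
    aggfunc='count'          # number of available rows per (food_group, nutrient) cell
)
print(counts.iloc[:5, :5])   # preview one corner of the count matrix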
Looking at the patterns in this nutrient distribution visualization, several interesting features stand out:
Distribution Shapes:
Many nutrients show distinct “spikes” or unimodal distributions within specific food groups
Some distributions are broad and spread across multiple orders of magnitude
Certain combinations show multi-modal patterns, suggesting possible subgroups within food categories
Density Patterns:
Some food groups (like dairy products and egg products) show very concentrated distributions for certain nutrients
Others have more diffuse patterns with long tails, indicating wide variability in nutrient content
Empty or nearly empty plots show that some nutrient-food group combinations have lots of missing data
Scale Variations:
The log-scale histograms reveal that nutrient concentrations often span 3-4 orders of magnitude
Some nutrients show remarkably consistent concentration ranges across different food groups
Others vary dramatically between food groups
Possible explanations for these patterns:
Processing Effects:
Sharp peaks might indicate standardized food processing or fortification
Broader distributions could reflect natural variation in unprocessed foods
Biological Constraints:
Some nutrients may have narrow concentration ranges due to biochemical limitations
Others might vary widely based on growing conditions or animal feed
Measurement/Reporting Factors:
Very precise peaks could reflect standardized reporting rather than actual variation
Some patterns might be artifacts of measurement methods or regulatory requirements
Nutrient Content Stripplots#
from matplotlib import transforms
nutrient_id_lookup_df=food_nutrient_rows[['nutrient_id', 'nutrient_name']].drop_duplicates()
def get_nutrient_id(nutrient_name):
result = nutrient_id_lookup_df[nutrient_id_lookup_df['nutrient_name'] == nutrient_name]['nutrient_id'].values
return result[0] if len(result) > 0 else None
def stripplot_nutrient(df, nutrient_name, units_of_measure):
plt.figure(figsize=(15, 4))
ax = plt.gca()
nutrient_id = get_nutrient_id(nutrient_name)
# Add offset to x-coordinates
sns.stripplot(data=df, x='food_group', y=nutrient_name, color='darkred', alpha=0.2, size=3, jitter=0.3)
plt.xticks(rotation=35, ha='right')
# ax.tick_params(axis='x', which='major', pad=-30, labelright=True, labelleft=False, direction='out')
# ax.set_xticklabels(ax.get_xticklabels(), ha='right', va='bottom', position=(7.0, 0.0))
# ax.set_xticklabels(ax.get_xticklabels(), ha='right', va='bottom')
# plt.xticks(rotation=90)
plt.xlabel('')
# plt.ylabel(f'{nutrient_name} Content ({units_of_measure}/100g)')
plt.ylabel(f'{nutrient_name} ({units_of_measure}/100g)')
plt.yscale('log')
# plt.margins(x=0.02)
plt.tight_layout()
plt.show()
def stripplot_nutrients(df):
# Get unique nutrient-unit pairs
nutrient_pairs = food_nutrient_rows[['nutrient_name', 'unit_of_measure']].drop_duplicates() #.head(1)
for _, row in nutrient_pairs.iterrows():
nutrient = row['nutrient_name']
unit = row['unit_of_measure']
# print(f"\nPlotting {nutrient} ({unit})")
stripplot_nutrient(df, nutrient, unit)
# try:
# except Exception as e:
# print(f"Error plotting {nutrient}: {str(e)}")
stripplot_nutrients(food_rows)
[Figures: one stripplot per nutrient, showing nutrient content (log scale) by food group]
What do stripplots show?#
Stripplots (also known as jitter plots or dot plots) are used here to show the distribution of nutrients across different food categories from the USDA food database.
As a representative example, we discuss the iron stripplot, which appears near the middle of the series above.
Construction:
The y-axis shows iron (Fe) content in milligrams per 100g of food on a logarithmic scale (note the 10^-2 to 10^2 range)
The x-axis lists various food categories
Each dot represents an individual food item within that category
The dots are spread horizontally within each category’s strip to avoid overlapping (jittering)
The red coloring helps visualize the density of points
What it shows:
Wide variation in iron content both within and between food categories
Some categories like “Baby Foods” and “Meats” show high variability, with items spanning multiple orders of magnitude in iron content
Many categories have a cluster of items in the 1-10 mg/100g range
Some categories (like “Dairy and Egg Products”) tend to have lower iron content
There are some extreme outliers, particularly in categories like “Spices and Herbs” which reach up to 100 mg/100g
The logarithmic scale is particularly important here as it allows visualization of both very small and very large iron contents in the same plot. This type of visualization is useful for nutritionists and food scientists to understand the distribution of iron content across different food types and identify particularly iron-rich or iron-poor categories.
Why the recurring horizontal bands of dots#
Food Processing Standards - Many processed foods are fortified with iron according to standardized amounts set by regulatory agencies. For example, many breakfast cereals and enriched flour products are fortified to meet specific nutritional targets, which would result in multiple products having identical iron content.
Common Iron Sources - Foods that use the same iron-rich ingredient as an additive (like fortified flour or a specific iron compound) would naturally end up with similar iron levels. This is especially common in processed foods from the same manufacturer or category.
Measurement Precision - The data collection method might round measurements to certain significant figures or use standardized testing methods that only measure to a specific precision level, causing different foods to appear to have exactly the same iron content.
Database Estimation - Since this data comes from a nutritional database, some values may be estimated based on similar foods or standard recipes rather than individually measured, leading to identical values being assigned to similar foods.
Serving Size Standardization - When iron content is reported per standard serving size (like per 100g), foods with similar base ingredients but different preparations might end up showing the same iron levels after standardization.
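One quick way to probe these explanations is to look for exact duplicate values: iron levels shared by many different foods are consistent with fortification targets, rounding, or standardized imputation. A small sketch:
top_repeats = food_rows['Iron, Fe'].value_counts().head(10)  # iron values shared by the most foods
print(top_repeats)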
Why are some food groups multimodal#
In “Dairy and Egg Products,” there appear to be at least two distinct clusters: one at a very low iron content (likely milk products, which are notably low in iron) and another at a higher level (likely egg products, which contain more iron). This makes biological sense given the very different nutritional profiles of dairy versus eggs.
The “Spices and Herbs” group shows multiple clusters, which could represent:
Pure spices and dried herbs (typically higher in iron due to concentration)
Spice blends and mixtures (moderate iron levels)
Fresh herbs (lower iron content due to higher water content)
“Breakfast Cereals” also shows clear multimodality, which likely reflects:
Fortified cereals (very high iron content cluster)
Natural/unfortified cereals (lower iron content cluster)
This separation is particularly interesting as it probably represents manufacturer decisions about fortification rather than natural variation.
This multimodality offers valuable insights into both the natural variation in food composition and human interventions like fortification or processing methods.
Nutrient Content Boxplots#
def boxplot_nutrient(df, nutrient_name, units_of_measure):
plt.figure(figsize=(15, 6))
sns.boxplot(data=df, x='food_group', y=nutrient_name, color='lightblue')
plt.xticks(rotation=45, ha='right')
plt.xlabel('USDA Food Group')
plt.ylabel(f'{nutrient_name} ({units_of_measure}/100g)')
# plt.ylabel(f'{nutrient_name} Content ({units_of_measure} per 100g Food Portion)')
plt.yscale('log')
plt.title(f'{nutrient_name} Content in USDA Tracked Foods by USDA Food Group')
plt.tight_layout()
plt.show()
def boxplot_nutrients(df):
# Get unique nutrient-unit pairs
nutrient_pairs = food_nutrient_rows[['nutrient_name', 'unit_of_measure']].drop_duplicates()
for _, row in nutrient_pairs.iterrows():
nutrient = row['nutrient_name']
unit = row['unit_of_measure']
# print(f"\nPlotting {nutrient} ({unit})")
try:
boxplot_nutrient(df, nutrient, unit)
except Exception as e:
print(f"Error plotting {nutrient}: {str(e)}")
boxplot_nutrients(food_rows)
[Figures: one boxplot per nutrient, showing nutrient content (log scale) by food group]
How the boxplot is worse than the stripplot#
Multimodality - The stripplot reveals clear multimodal distributions (like in breakfast cereals and dairy/eggs) that are completely obscured in the boxplot. Boxplots assume a single central tendency and can’t represent multiple peaks in the distribution.
Point Density - The stripplot shows exactly where data points cluster densely versus sparsely. For example, you can see if there are many samples at a particular iron content level. Boxplots reduce this to just quartiles and outliers, losing the detailed density information.
Gaps in Distribution - The stripplot reveals clear gaps in some food groups where no samples exist, suggesting natural breaks between subgroups of foods. These gaps are invisible in boxplots since they just show continuous ranges.
Sample Size Differences - The stripplot shows the exact number of samples in each food group through the number of points. While boxplots can be modified to show this, traditional boxplots don’t convey sample size information.
Fine Structure of Outliers - The stripplot shows the precise distribution of outlier points, while boxplots typically collapse these into simple whiskers or individual points, losing information about potential patterns or clusters in the outlier region.
Discretization Effects - Some food groups show horizontal “banding” in the stripplot. Many such interesting features are completely hidden in a boxplot representation.
How the boxplot is better than the stripplot#
The boxplot can be misleading if you assume:
The data is unimodal
The distribution is continuous
The data points are evenly distributed between the quartiles
The boxplot does offer several advantages for this particular dataset:
Immediate Visual Summary - With so many food groups and highly variable iron content, the boxplot makes it much easier to quickly compare the median and quartile ranges between groups. The stripplot’s individual points, while more detailed, can make these quick comparisons more challenging.
Outlier Emphasis - Given this dataset has iron content varying across multiple orders of magnitude, the boxplot’s explicit marking of outliers helps identify extreme values more clearly. In the stripplot, these extreme points blend in with the overall distribution.
Scale Readability - The logarithmic scale combined with dense point clouds in the stripplot can make it difficult to read exact values. The boxplot’s clear quartile boxes and median lines make it easier to read approximate values off the y-axis.
Visual Clutter - Some food groups (like Beef Products) have many samples clustered in a small range, creating significant overplotting in the stripplot where points overlap extensively. The boxplot summarizes this dense information more cleanly.
Space Efficiency - The food group labels on the x-axis are quite long, and the boxplot’s narrower format makes these labels more readable compared to the wider space needed for the stripplot’s point spread.
For a complete analysis, having access to both visualization types is ideal, as they complement each other’s strengths and weaknesses.
Foods Notable for their Iron Content#
# Group by food_group and find min and max iron values
iron_summary = food_rows.groupby('food_group').agg({
'Iron, Fe': ['min', 'max']
}).reset_index()
# Get the food names for min and max values in each group
results = []
for group in food_rows['food_group'].unique():
group_df = food_rows[food_rows['food_group'] == group]
# Find max iron food
max_iron = group_df.loc[group_df['Iron, Fe'].idxmax()]
# Find min iron food
min_iron = group_df.loc[group_df['Iron, Fe'].idxmin()]
results.append({
'Food Group': group,
'Highest Iron Food': max_iron['food_name'],
'Highest Iron (mg/100g)': max_iron['Iron, Fe'],
'Lowest Iron Food': min_iron['food_name'],
'Lowest Iron (mg/100g)': min_iron['Iron, Fe']
})
# Create DataFrame from results
summary_df = pd.DataFrame(results)
# Display results
print("\nHighest and Lowest Iron Content Foods by Food Group:")
display(summary_df)
Highest and Lowest Iron Content Foods by Food Group:
Food Group | Highest Iron Food | Highest Iron (mg/100g) | Lowest Iron Food | Lowest Iron (mg/100g) | |
---|---|---|---|---|---|
0 | Dairy and Egg Products | Beverage, instant breakfast powder, chocolate,... | 12.82 | Butter oil, anhydrous | 0.00 |
1 | Spices and Herbs | Spices, thyme, dried | 123.60 | Seasoning mix, dry, sazon, coriander & annatto | 0.00 |
2 | Baby Foods | Babyfood, cereal, oatmeal, with honey, dry | 67.23 | Babyfood, water, bottled, GERBER, without adde... | 0.00 |
3 | Fats and Oils | Butter replacement, without fat, powder | 2.00 | Fat, beef tallow | 0.00 |
4 | Poultry Products | Duck, domesticated, liver, raw | 30.53 | Chicken, broilers or fryers, breast, skinless,... | 0.34 |
5 | Soups, Sauces, and Gravies | Gravy, instant turkey, dry | 9.57 | Soup, HEALTHY CHOICE Chicken Noodle Soup, canned | 0.00 |
6 | Sausages and Luncheon Meats | Braunschweiger (a liver sausage), pork | 11.20 | Dutch brand loaf, chicken, pork and beef | 0.16 |
7 | Breakfast Cereals | Cereals ready-to-eat, RALSTON Enriched Wheat B... | 67.67 | Cereals, WHEATENA, cooked with water | 0.56 |
8 | Snacks | Formulated bar, MARS SNACKFOOD US, SNICKERS MA... | 18.15 | Rice crackers | 0.00 |
9 | Fruits and Fruit Juices | Baobab powder | 8.42 | Pears, asian, raw | 0.00 |
10 | Pork Products | Pork, fresh, variety meats and by-products, li... | 23.30 | Pork, fresh, variety meats and by-products, le... | 0.09 |
11 | Vegetables and Vegetable Products | Seaweed, Canadian Cultivated EMI-TSUNOMATA, dry | 66.38 | Waterchestnuts, chinese, (matai), raw | 0.06 |
12 | Nut and Seed Products | Seeds, sesame butter, paste | 19.20 | Seeds, sisymbrium sp. seeds, whole, dried | 0.11 |
13 | Beef Products | Beef, variety meats and by-products, spleen, raw | 44.55 | Beef, variety meats and by-products, suet, raw | 0.17 |
14 | Beverages | Beverages, UNILEVER, SLIMFAST Shake Mix, high ... | 24.84 | Alcoholic beverage, beer, regular, BUDWEISER | 0.00 |
15 | Finfish and Shellfish Products | Mollusks, clam, mixed species, cooked, breaded... | 13.91 | Fish, wolffish, Atlantic, raw | 0.09 |
16 | Legumes and Legume Products | Peanut butter, chunky, vitamin and mineral for... | 17.50 | SILK Coffee, soymilk | 0.00 |
17 | Lamb, Veal, and Game Products | Lamb, variety meats and by-products, spleen, raw | 41.89 | Veal, breast, separable fat, cooked | 0.45 |
18 | Baked Products | Archway Home Style Cookies, Reduced Fat Ginger... | 12.57 | Leavening agents, baking soda | 0.00 |
19 | Sweets | Cocoa, dry powder, unsweetened, HERSHEY'S Euro... | 36.00 | Topping, SMUCKER'S MAGIC SHELL | 0.00 |
20 | Cereal Grains and Pasta | Rice bran, crude | 18.54 | Rice, white, glutinous, unenriched, cooked | 0.14 |
21 | Fast Foods | BURGER KING, DOUBLE WHOPPER, with cheese | 5.30 | McDONALD'S, Hot Caramel Sundae | 0.08 |
22 | Meals, Entrees, and Side Dishes | Rice mix, cheese flavor, dry mix, unprepared | 4.74 | Rice bowl with chicken, frozen entree, prepare... | 0.35 |
23 | American Indian/Alaska Native Foods | Whale, beluga, meat, dried (Alaska Native) | 72.35 | Oil, beluga, whale (Alaska Native) | 0.00 |
24 | Restaurant Foods | DENNY'S, top sirloin steak | 3.27 | Restaurant, Latino, arroz con leche (rice pudd... | 0.23 |
Observations:
Most Striking Iron Content:
The highest overall iron content is found in dried thyme (Spices and Herbs) at 123.60 mg/100g, nearly twice that of the next highest food item. This shows how concentrated dried herbs can be an unexpected source of essential nutrients, although herbs are typically consumed in small quantities.
Many foods have 0.00 mg iron content, particularly in categories like Beverages, Baked Products, and Fruits
Interesting Contrasts:
In Dairy and Egg Products, there is a dramatic range from 12.82 mg/100g (instant breakfast powder) to 0.00 mg/100g (anhydrous butter oil)
Breakfast Cereals show a notable difference between enriched wheat (67.67 mg) and basic cooked WHEATENA (0.56 mg), highlighting the impact of fortification
Organ Meats - Several of the highest iron contents come from organ meats like liver:
Duck liver (30.53 mg)
Braunschweiger liver sausage (11.20 mg)
Spleen in both beef and lamb categories (44.55 mg and 41.89 mg respectively)
Surprising Findings:
Seaweed (Canadian Cultivated EMI-TSUNOMATA) has a remarkably high iron content at 66.38 mg
Fast food items have relatively low iron content even at their highest (BURGER KING DOUBLE WHOPPER with cheese at 5.30 mg/100g)
Raw soybeans have a notably high iron content (15.70 mg) compared to other legumes
Traditional vs. Processed:
Many of the highest iron contents come from either unprocessed natural foods (organs, seeds) or fortified processed foods (cereals, instant breakfast powders)
The lowest iron contents often appear in refined or heavily processed foods
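For reference, the per-group extremes in the table above can be reproduced with a simple groupby. The sketch below is illustrative rather than the notebook's actual query; it assumes the food_rows DataFrame used later in this notebook, with 'food_group', 'food_name', and 'Iron, Fe' columns.
# Illustrative sketch (not the exact query used above): for each food group,
# find the foods with the highest and lowest iron content (mg per 100 g).
iron = food_rows.dropna(subset=['Iron, Fe'])
idx_max = iron.groupby('food_group')['Iron, Fe'].idxmax()
idx_min = iron.groupby('food_group')['Iron, Fe'].idxmin()
iron_extremes = pd.DataFrame({
    'Highest Iron Food': iron.loc[idx_max, 'food_name'].values,
    'Highest Iron (mg/100g)': iron.loc[idx_max, 'Iron, Fe'].values,
    'Lowest Iron Food': iron.loc[idx_min, 'food_name'].values,
    'Lowest Iron (mg/100g)': iron.loc[idx_min, 'Iron, Fe'].values,
}, index=idx_max.index)
display(iron_extremes)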
DATA QUALITY ASSESSMENT#
Analysis of Missing and Invalid Values#
def analyze_negative_percentages(df):
# Get total rows
total_rows = len(df)
# Select numeric columns, excluding IDs and names
nutrition_cols = [col for col in df.columns if col not in ['food_id', 'food_name', 'food_group','cluster','embedding']]
numeric_cols = df[nutrition_cols].select_dtypes(include=['int64', 'float64']).columns
# Calculate negative value statistics
neg_counts = (df[numeric_cols] < 0).sum()
neg_percentages = (neg_counts / total_rows * 100).round(2)
min_values = df[numeric_cols].min().round(4)
# Create analysis DataFrame
neg_analysis = pd.DataFrame({
'Nutritional Component': neg_percentages.index,
'Negative Count': neg_counts.values,
'Negative Percentage': neg_percentages.values,
'Minimum Value': min_values
})
# Sort and filter to show only columns with negative values
neg_analysis = neg_analysis[neg_analysis['Negative Count'] > 0]
neg_analysis = neg_analysis.sort_values('Negative Percentage', ascending=False)
neg_analysis = neg_analysis.reset_index(drop=True)
# Display results
if len(neg_analysis) > 0:
display(neg_analysis)
print(f"\nFound {len(neg_analysis)} columns with negative values out of {len(numeric_cols)} numeric columns")
else:
print("No negative values found in the dataset")
return
return neg_analysis
analyze_negative_percentages(food_rows)
def analyze_na_percentages(df):
total_rows = len(df)
nutrition_cols = [col for col in df.columns if col not in ['food_id', 'food_name', 'food_group', 'cluster','embedding', 'source_type',]]
na_percentages = (df[nutrition_cols].isna().sum() / total_rows * 100).round(2)
# Check if there are any NA values
if na_percentages.sum() == 0:
print("No NA values found in the dataset")
return
na_analysis = pd.DataFrame({
'Nutritional Component': na_percentages.index,
'NA Percentage': na_percentages.values
})
na_analysis = na_analysis.sort_values('NA Percentage', ascending=False)
na_analysis = na_analysis.reset_index(drop=True)
display(na_analysis)
# print("all food_rows")
# analyze_na_percentages(food_rows)
print("USDA measured food_rows (aka ['source_type'].isin(['1'] )")
analyze_na_percentages(food_rows[food_rows['source_type'].isin(['1'])])
print("USDA estimated food_rows (aka ['source_type'].isin(['4', '7', '8', '9'] ) ")
analyze_na_percentages(food_rows[food_rows['source_type'].isin(['4', '7', '8', '9'])])
No negative values found in the dataset
USDA measured food_rows (aka ['source_type'].isin(['1'] )
 | Nutritional Component | NA Percentage
---|---|---
0 | Palmitoleic fatty acid | 10.02 |
1 | Oleic fatty acid | 6.13 |
2 | Linoleic fatty acid | 6.09 |
3 | Thiamin | 2.70 |
4 | Riboflavin | 2.68 |
5 | Niacin | 2.67 |
6 | Copper, Cu | 1.98 |
7 | Zinc, Zn | 1.77 |
8 | Magnesium, Mg | 1.53 |
9 | Phosphorus, P | 1.53 |
10 | Potassium, K | 1.15 |
11 | Calcium, Ca | 0.07 |
12 | Sodium, Na | 0.04 |
13 | Ash | 0.00 |
14 | Iron, Fe | 0.00 |
15 | Protein | 0.00 |
16 | Water | 0.00 |
USDA estimated food_rows (aka ['source_type'].isin(['4', '7', '8', '9'] )
 | Nutritional Component | NA Percentage
---|---|---
0 | Palmitoleic fatty acid | 16.02 |
1 | Oleic fatty acid | 15.64 |
2 | Linoleic fatty acid | 15.64 |
3 | Copper, Cu | 14.14 |
4 | Thiamin | 8.67 |
5 | Phosphorus, P | 8.56 |
6 | Magnesium, Mg | 8.56 |
7 | Niacin | 8.51 |
8 | Zinc, Zn | 8.40 |
9 | Riboflavin | 7.46 |
10 | Potassium, K | 5.64 |
11 | Ash | 0.17 |
12 | Calcium, Ca | 0.11 |
13 | Iron, Fe | 0.00 |
14 | Protein | 0.00 |
15 | Sodium, Na | 0.00 |
16 | Water | 0.00 |
Missing Data Map#
def missing_values_map3(df):
# Create binary matrix of missing values (True/False)
# missing_matrix = df.isna()
missing_matrix = df.loc[:, df.columns.str.startswith('n:')]
cluster_map = sns.clustermap(
data=missing_matrix,
cmap=sns.color_palette(['red', 'yellow', 'green'], as_cmap=True),
xticklabels=True,
yticklabels=False,
figsize=(32, 16),
method='average',
metric='euclidean',
row_cluster=True,
col_cluster=True,
cbar_pos=None # Removes colorbar
)
# Remove dendrograms while keeping clustering
cluster_map.ax_row_dendrogram.set_visible(False)
cluster_map.ax_col_dendrogram.set_visible(False)
plt.xlabel('USDA Tracked Nutrients (148 distinct, clustered)')
plt.ylabel('USDA Tracked Foods (7793 distinct, clustered)')
plt.title('Nutrient Data Availability Clustermap: USDA Measured (green), USDA Assumed (yellow), Missing Data (red)')
plt.tight_layout()
plt.show()
# missing_values_map(food_rows)
missing_values_map3(data_source_food_rows)
/root/fnana/fnana_venv/lib/python3.10/site-packages/seaborn/matrix.py:560: UserWarning: Clustering large matrix with scipy. Installing `fastcluster` may give better performance.
warnings.warn(msg)
/root/fnana/fnana_venv/lib/python3.10/site-packages/seaborn/matrix.py:560: UserWarning: Clustering large matrix with scipy. Installing `fastcluster` may give better performance.
warnings.warn(msg)

This clustered heatmap visualizes nutrient data availability in the USDA database for 7,793 foods (y-axis) across 148 nutrients (x-axis). The visualization uses color coding: green for measured data (from lab testing), yellow for assumed/calculated data, and red for missing data. Similar patterns in both foods and nutrients are clustered together.
Key findings show systematic data collection patterns:
Core nutrients are well-documented (green) in left/center regions
Complex measurements (fatty acids, specific vitamins, minerals) show more gaps (red), clustered on the right
Estimated values (yellow) appear throughout but concentrate in the center
Missing data often occurs in related nutrient groups across the same foods (horizontal red bands)
The clustering reveals that data completeness likely corresponds to measurement priority, cost, and methodology availability in food analysis.
Following the USDA's SR-Legacy_Doc.pdf, MEASURED vs. ASSUMED values are classified using the USDA src_cd table as follows:
SELECT
...
CASE
WHEN sc.src_cd::int IN (1, 6, 12, 13) THEN 'measured'
WHEN sc.src_cd::int IN (4, 7, 9, 8, 11, 5) THEN 'assumed'
END AS data_source
...
FROM
src_cd sc
...
;
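The wide table passed to missing_values_map3() above (data_source_food_rows, whose nutrient columns are prefixed with 'n:') is assembled earlier in the notebook. As a rough sketch of the idea, each food × nutrient cell can be encoded as 0 = missing, 1 = assumed, 2 = measured to match the three-color palette; the helper name and the long-format columns ('food_id', 'nutrient_name', 'data_source') below are illustrative assumptions, not the notebook's actual pipeline.
def build_availability_matrix(long_df):
    """Sketch: pivot a long food-nutrient table into the coded wide matrix that
    missing_values_map3() expects (0 = missing, 1 = assumed, 2 = measured)."""
    coded = long_df.assign(code=long_df['data_source'].map({'assumed': 1, 'measured': 2}))
    wide = coded.pivot_table(index='food_id', columns='nutrient_name',
                             values='code', aggfunc='max')
    wide = wide.fillna(0).astype(int)  # pairs with no record at all count as missing
    wide.columns = ['n:' + str(c) for c in wide.columns]
    return wide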
Missing Values Histogram#
def analyze_missing_data(df, figsize=(15, 8), save_path=None):
# Calculate missing value statistics
missing_counts = df.isnull().sum()
total_cells = np.prod(df.shape)
missing_cells = missing_counts.sum()
# Calculate percentages
missing_percentages = (missing_counts / len(df) * 100).round(2)
missing_percentages = missing_percentages[missing_percentages > 0].sort_values(ascending=True)
# Create summary statistics
stats = {
'total_missing': int(missing_cells),
'total_cells': total_cells,
'overall_missing_pct': (missing_cells / total_cells * 100),
'missing_by_column': missing_counts.to_dict(),
'missing_pct_by_column': missing_percentages.to_dict()
}
# Create the visualization
if len(missing_percentages) > 0: # Only create plot if there are missing values
plt.figure(figsize=figsize)
# Create missing value percentage plot
ax = missing_percentages.plot(kind='barh')
# Customize the plot
plt.title('Columns with Missing Values', pad=20)
plt.xlabel('% Missing')
plt.ylabel('Data Column')
# Add percentage labels on the bars
for i, v in enumerate(missing_percentages):
ax.text(v + 0.5, i, f'{v:.1f}%', va='center')
# Adjust layout
plt.tight_layout()
# Save the figure if a path was provided (save_path was otherwise unused)
if save_path:
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
# Create detailed missing value report
missing_report = pd.DataFrame({
'Missing Count': missing_counts,
'Missing Percentage': (missing_counts / len(df) * 100).round(2)
}).sort_values('Missing Percentage', ascending=False)
missing_report = missing_report[missing_report['Missing Count'] > 0]
# Add the report to the stats
stats['missing_report'] = missing_report
return stats
# Analyze missing data
stats = analyze_missing_data(food_rows,
figsize=(15, 6),
save_path='missing_data_analysis.png')
# Print summary
# print(f"\nMissing Data Summary:")
# print(f"Total missing values: {stats['total_missing']:,}")
# print(f"Overall missing percentage: {stats['overall_missing_pct']:.2f}%")
# # Show detailed report
# print("\nDetailed Missing Value Report:")
# print(stats['missing_report'])

How the Missing Data Histogram compares to the Missing Data Map
Better aspects:
Precise Quantification - It shows exact percentages of missing data for each variable (e.g., 9.2% for Oleic fatty acid), which isn’t easily quantifiable from the pattern matrix
Clear Ranking - It orders variables from most to least missing data, making it immediately clear which nutrients have the biggest data quality issues
Easier Comparison - The relative magnitude of missing data between different nutrients is more easily comparable with the bar lengths
Simpler Reading - For stakeholders who just need top-level statistics, this is more accessible than the pattern matrix
Worse aspects:
Loss of Pattern Information - You can’t see if the same foods are missing multiple nutrients simultaneously, which was visible in the pattern matrix
No Row-Level Detail - It obscures whether missing data is randomly distributed across foods or concentrated in specific food types
Hidden Relationships - You can’t identify if related nutrients (like different fatty acids) tend to be missing together
Less Granular - The overall structure of missingness is compressed into a single percentage, losing the detailed pattern information
The two visualizations are best used together - the bar chart for quick insights and overall assessment, and the pattern matrix for deeper investigation of missing data relationships and patterns.
MISSING DATA IMPUTATION#
Food Group Median Imputation#
To deal with missing data, we impute each missing nutrient value with the median of that nutrient within the food's food group, falling back to the overall median when the group has no measured values for that nutrient.
def impute_missing_data1(df, grouped_feature='food_group'):
df_imputed = df.copy()
numerical_cols = df.select_dtypes(include=['float64']).columns
for col in numerical_cols:
medians = df.groupby(grouped_feature)[col].transform('median')
df_imputed[col] = df_imputed[col].fillna(medians)
if df_imputed[col].isna().any():
overall_median = df[col].median()
df_imputed[col] = df_imputed[col].fillna(overall_median)
return df_imputed
Invoke Imputation Strategy#
imputed_food_rows = impute_missing_data1(food_rows, grouped_feature='food_group')
# imputed_food_rows = impute_missing_data2(food_rows)
# imputed_food_rows
# imputed_food_rows.info()
imputed_food_rows.shape
(7713, 22)
Verify Imputation: Invalid Values#
analyze_negative_percentages(imputed_food_rows)
No negative values found in the dataset
Verify Imputation: N/A Values#
analyze_na_percentages(imputed_food_rows)
No NA values found in the dataset
Verify Imputation: Correlations#
def plot_correlation_matrix(df,title):
# Drop non-numeric columns
numeric_df = df.select_dtypes(include=[np.number])
# Calculate correlation matrix (corr() excludes N/A values)
corr_matrix = numeric_df.corr()
# Create mask for upper triangle
mask = np.triu(np.ones_like(corr_matrix, dtype=bool))
plt.figure(figsize=(24/2, 20/2))
# Create heatmap with mask
sns.heatmap(corr_matrix,
mask=mask,
annot=True,
cmap='coolwarm',
center=0,
fmt='.2f',
square=True,
linewidths=0.5,
annot_kws={'size': 7},
cbar_kws={"shrink": .8})
plt.xticks(rotation=45, ha='right', size=8)
plt.yticks(rotation=0, size=8)
plt.title(title, pad=20, size=12)
plt.tight_layout()
plt.show()
plot_correlation_matrix(food_rows,'Correlations of Nutrients BEFORE Imputation')
plot_correlation_matrix(imputed_food_rows,'Correlations of Nutrients AFTER Imputation')


def plot_correlation_matrix_diff(df1, df2):
numeric_df1 = df1.select_dtypes(include=[np.number])
corr_matrix1 = numeric_df1.corr()
numeric_df2 = df2.select_dtypes(include=[np.number])
corr_matrix2 = numeric_df2.corr()
delta = corr_matrix2 - corr_matrix1
# Create mask for upper triangle
mask = np.triu(np.ones_like(delta, dtype=bool))
plt.figure(figsize=(15, 10))
# Create heatmap with mask
sns.heatmap(delta,
mask=mask,
annot=True,
cmap='coolwarm',
center=0,
fmt='.2f',
square=True,
linewidths=0.5,
annot_kws={'size': 7},
cbar_kws={"shrink": .8})
plt.xticks(rotation=45, ha='right', size=8)
plt.yticks(rotation=0, size=8)
plt.title('Difference in Correlation of Nutrients due to Imputation', pad=20, size=12)
plt.tight_layout()
plt.show()
delta_values = delta[mask].values.flatten()
delta_values = delta_values[~np.isnan(delta_values)] # Remove NaN values
# plt.figure(figsize=(24/2, 20/2))
plt.figure(figsize=(12, 5))
sns.histplot(data=delta_values, bins=10, color='skyblue')
plt.title('Histogram of Nutrient Content Correlation Differences', size=10)
plt.xlabel('Correlation Difference due to Imputation')
plt.ylabel('Count')
plt.yscale('log')
plt.tight_layout()
plt.show()
plot_correlation_matrix_diff(food_rows,imputed_food_rows )


This is a check of whether imputing the missing data distorted the correlation relationships among nutrients. For iron, the correlations have mostly stayed the same, with only small positive increases.
MORE DATA EXPLORATION#
Once we have performed data imputation, we can explore the data further, for example by clustering food items by their nutrient content and comparing the embedding-based clusters against the actual labeled food groups.
Patterns in Food Nutrient Content#
def nutrient_content_clustermap(df):
"""
Create a clustered heatmap from a pre-processed dataframe.
Parameters:
df (pandas.DataFrame): Pre-pivoted and imputed dataframe with numeric columns
"""
df_log = np.log1p(df) # for contrast at low levels
# df_log = np.log1p(np.log1p(df)) # higher contrast at low levels - for visualization only!
# df_log = np.log1p(np.log1p(np.log1p(df))) # higher contrast at low levels - for visualization only!
# Drop any rows or columns that still have all NaN values
df_log = df_log.dropna(axis=0, how='all')
df_log = df_log.dropna(axis=1, how='all')
# Create the figure and axis
plt.figure(figsize=(15, 6))
# Create clustered heatmap
cluster_map = sns.clustermap(
data=df_log,
cmap='viridis',
# cmap='coolwarm',
xticklabels=True,
yticklabels=False,
figsize=(20, 12),
method='average',
metric='euclidean',
row_cluster=True,
col_cluster=True,
cbar_pos=None # Removes colorbar
)
# Remove dendrograms while keeping clustering
cluster_map.ax_row_dendrogram.set_visible(False)
cluster_map.ax_col_dendrogram.set_visible(False)
plt.xlabel('Nutrients (tracked by USDA)')
plt.ylabel('Foods (tracked by USDA)')
plt.title('Nutrient Content Patterns in Foods\nFoods (clustered rows) x Nutrients (clustered columns) x Relative Amount (log scaled color intensity)')
# Rotate x-axis labels for better readability
# plt.setp(cluster_map.ax_heatmap.get_xticklabels(), rotation=45, ha='right')
# cluster_map.fig.suptitle('Bright colors show which foods (rows) contain lots of which nutrients (column)? \nlog1p(Nutrient Content) with Foods (clustered rows) x Nutrients (clustered columns)',
# cluster_map.fig.suptitle('Which groups of nutrients co-occur together in foods?\nlog1p(Nutrient Content) with Foods (clustered rows) x Nutrients (clustered columns)',
# cluster_map.fig.suptitle('Which groups of nutrients co-occur together in foods?\nNutrient Content (color) x Foods (clustered rows) x Nutrients (clustered columns)',
# cluster_map.fig.suptitle('Which groups of nutrients co-occur together in foods?\nFoods (clustered rows) x Nutrients (clustered columns) x Nutrient Content (color)',
# y=1.02,
# fontsize=16)
# Adjust layout to prevent label cutoff
plt.tight_layout()
# Save the plot
# plt.savefig('nutrient_heatmap3.png',
# dpi=300,
# bbox_inches='tight')
plt.show()
nutrient_content_clustermap(imputed_food_rows.select_dtypes('float64'))
# nutrient_content_clustermap(imputed_food_rows.select_dtypes('float64'))
/root/fnana/fnana_venv/lib/python3.10/site-packages/seaborn/matrix.py:560: UserWarning: Clustering large matrix with scipy. Installing `fastcluster` may give better performance.
warnings.warn(msg)
/root/fnana/fnana_venv/lib/python3.10/site-packages/seaborn/matrix.py:560: UserWarning: Clustering large matrix with scipy. Installing `fastcluster` may give better performance.
warnings.warn(msg)
<Figure size 1500x600 with 0 Axes>

We see a heatmap visualization of nutrient content patterns across different foods, using data tracked by the USDA.
The visualization presents a clustered analysis where:
Foods are represented as rows (clustered)
Nutrients are shown as columns (also clustered)
The relative amount of each nutrient is indicated by color intensity (using a log-scaled color scheme from purple to green/blue)
The nutrients shown include:
Minerals: Iron, Zinc, Calcium, Magnesium, Copper, Phosphorus, Potassium, Sodium
B vitamins: Riboflavin, Thiamin, Niacin
Fatty acids: Palmitoleic, Linoleic, and Oleic fatty acids
Macronutrients and moisture: Protein, Water
Other: Ash
The brighter turquoise/green areas indicate higher concentrations of certain nutrients, while the darker purple areas indicate lower concentrations. There appear to be distinct patterns where certain foods cluster together based on their nutrient profiles.
Looking at the heatmap, there are several patterns that align with what is known about food composition:
Mineral Grouping Pattern: There’s a notable cluster of minerals (Na, P, K, Ca, Mg) that often appear together in similar concentrations in foods. This makes biological sense since these minerals are often found together in plant tissues and animal muscle, where they play crucial roles in cellular function and structure.
Fat-Soluble Nutrient Association: The fatty acids (Palmitoleic, Linoleic, and Oleic) show strong correlations with each other - when one is present, the others often are too. This pattern likely exists because these fatty acids are commonly found together in fat-rich foods like oils, nuts, and fatty fish. We can see this in the consistent banding patterns on the left side of the heatmap (a quick numeric check of these correlations follows this discussion).
Water-Protein Relationship: There appears to be an interesting relationship between water content and protein - many foods high in one tend to be high in the other. This might reflect the high water content of fresh, protein-rich foods like meats, legumes, and certain vegetables.
Vitamin Clustering: The B vitamins (Riboflavin, Thiamin, Niacin) show similar distribution patterns across foods. This makes sense evolutionarily, as these vitamins often work together in metabolic processes, so organisms tend to concentrate them together in tissues.
Inverse Relationships: There appear to be some inverse relationships - when water content is high (bright turquoise), fatty acid content tends to be low (dark purple). This is logical since water and fat don’t mix, and foods tend to be either water-rich (like fruits and vegetables) or fat-rich (like oils and nuts).
Distinct Food Group Patterns: You can see distinct banding patterns that likely correspond to different food groups:
Some rows show high mineral content but low fat content (possibly vegetables and legumes)
Others show high fat content but lower mineral content (possibly oils and fats)
Some show moderate levels across many nutrients (possibly whole grains and meats)
These patterns reflect both the biological functions of these nutrients in the organisms we eat (plants and animals) and the evolutionary pressures that led to certain nutrients being concentrated together in specific tissues. They also reflect the chemical properties of these nutrients - how they’re stored, transported, and utilized in biological systems.
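As a quick numeric check of the fatty-acid association noted above (assuming the imputed_food_rows DataFrame built earlier in this notebook), the pairwise correlations can be printed directly:
# Pairwise Pearson correlations between the three fatty-acid columns
fatty_cols = ['Palmitoleic fatty acid', 'Oleic fatty acid', 'Linoleic fatty acid']
print(imputed_food_rows[fatty_cols].corr().round(2))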
Embeddings for Better Food Groups#
from sklearn.manifold import TSNE
def plot_food_name_embedding_by_food_group_tsne(foods_embeddings, perplexity=30, random_state=42):
"""
Create a t-SNE visualization of food embeddings colored by food group,
using cosine similarity (vector angles) instead of Euclidean distance
Parameters:
foods_embeddings (pd.DataFrame): DataFrame containing 'food_name', 'food_group', and 'embedding' columns
where 'embedding' contains numpy float32 arrays
perplexity (int): t-SNE perplexity parameter
random_state (int): Random seed for reproducibility
"""
# Stack embeddings into a 2D numpy array
embeddings = np.vstack(foods_embeddings['embedding'].values)
# Verify dtype is float32
if embeddings.dtype != np.float32:
embeddings = embeddings.astype(np.float32)
# Normalize the vectors to unit length for cosine similarity
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
normalized_embeddings = embeddings / norms
# Create and fit t-SNE with cosine metric
tsne = TSNE(n_components=2,
perplexity=perplexity,
random_state=random_state,
init='random',
learning_rate='auto',
metric='cosine') # important for embedding vectors!
tsne_results = tsne.fit_transform(normalized_embeddings)
# Create DataFrame for plotting
plot_df = pd.DataFrame({
'x': tsne_results[:, 0],
'y': tsne_results[:, 1],
'food_group': foods_embeddings['food_group']
})
# Set up the plot style
plt.figure(figsize=(15, 10))
sns.set_style("whitegrid")
# Create color palette for food groups
unique_groups = plot_df['food_group'].unique()
color_palette = sns.color_palette("husl", n_colors=len(unique_groups))
color_dict = dict(zip(unique_groups, color_palette))
# Create scatter plot
for group in unique_groups:
mask = plot_df['food_group'] == group
group_data = plot_df[mask]
if len(group_data) > 0: # Only plot if group has data
plt.scatter(group_data['x'],
group_data['y'],
alpha=0.6,
c=[color_dict[group]],
label=group)
plt.title('t-SNE Projection of Food Name Embeddings', fontsize=14, pad=20)
plt.xlabel('t-SNE Component 1')
plt.ylabel('t-SNE Component 2')
plt.legend(bbox_to_anchor=(1.05, 1),
loc='upper left',
borderaxespad=0.,
title='Food Group')
plt.tight_layout()
plt.show()
plot_food_name_embedding_by_food_group_tsne(food_rows)

This t-SNE (t-Distributed Stochastic Neighbor Embedding) visualization shows how food items cluster based on their semantic relationships in embedding space. Each point represents a food item, colored by its category (e.g., seafood in teal, cereals in green, dairy in pink).
The plot illustrates that OpenAI's text embeddings capture meaningful relationships, as similar food categories form distinct clusters. The t-SNE algorithm reduces the original high-dimensional embeddings (likely 1536 dimensions) to 2 dimensions while approximately preserving local neighborhood structure, making these semantic relationships visible.
The clustering suggests that these embeddings encode useful information about food categories, which could improve models predicting nutrient content. Foods in the same cluster likely share similar nutritional properties, making embedding coordinates valuable features for nutrient prediction.
This highlights how language model embeddings can automatically group foods semantically, potentially enhancing nutrient modeling without manual classification.
FEATURE ENGINEERING#
Food Name Embedding Vectors#
def append_food_name_embedding(df, truncate_dims=8, column_name='embedding'):
"""
Process embeddings by truncating and normalizing them, then append to dataframe.
Normalization Logic: https://platform.openai.com/docs/guides/embeddings
"""
def normalize_l2(x):
x = np.array(x)
if x.ndim == 1:
norm = np.linalg.norm(x)
if norm == 0:
return x
return x / norm
else:
norm = np.linalg.norm(x, 2, axis=1, keepdims=True)
return np.where(norm == 0, x, x / norm)
# Create copy of input dataframe
result_df = df.copy()
# Process embeddings
if truncate_dims is not None:
# Truncate then normalize each embedding
processed_embeddings = [normalize_l2(emb[:truncate_dims]) for emb in df[column_name]]
embedding_dim = truncate_dims
else:
# If no truncation, normalize full embeddings
processed_embeddings = [normalize_l2(emb) for emb in df[column_name]]
embedding_dim = len(df[column_name].iloc[0])
# Convert processed embeddings to dataframe
embedding_df = pd.DataFrame(
np.vstack(processed_embeddings),
columns=[f'embed_{i}' for i in range(embedding_dim)]
)
# Concatenate with original dataframe
result_df = pd.concat([result_df, embedding_df], axis=1)
return result_df
imputed_food_rows = append_food_name_embedding(imputed_food_rows)
type(imputed_food_rows)
# imputed_measured_food_nutrient_rows3
# pd.set_option('display.max_info_columns', 200)
imputed_food_rows.info()
# pd.reset_option('display.max_info_columns')
imputed_food_rows.shape
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 7713 entries, 0 to 7712
Data columns (total 30 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 food_id 7713 non-null object
1 food_name 7713 non-null object
2 food_group 7713 non-null object
3 source_type 7713 non-null object
4 Ash 7713 non-null float64
5 Calcium, Ca 7713 non-null float64
6 Copper, Cu 7713 non-null float64
7 Iron, Fe 7713 non-null float64
8 Linoleic fatty acid 7713 non-null float64
9 Magnesium, Mg 7713 non-null float64
10 Niacin 7713 non-null float64
11 Oleic fatty acid 7713 non-null float64
12 Palmitoleic fatty acid 7713 non-null float64
13 Phosphorus, P 7713 non-null float64
14 Potassium, K 7713 non-null float64
15 Protein 7713 non-null float64
16 Riboflavin 7713 non-null float64
17 Sodium, Na 7713 non-null float64
18 Thiamin 7713 non-null float64
19 Water 7713 non-null float64
20 Zinc, Zn 7713 non-null float64
21 embedding 7713 non-null object
22 embed_0 7713 non-null float32
23 embed_1 7713 non-null float32
24 embed_2 7713 non-null float32
25 embed_3 7713 non-null float32
26 embed_4 7713 non-null float32
27 embed_5 7713 non-null float32
28 embed_6 7713 non-null float32
29 embed_7 7713 non-null float32
dtypes: float32(8), float64(17), object(5)
memory usage: 1.5+ MB
(7713, 30)
Food Name Embedding Clusters#
Now we will create more granular food categories by applying clustering to food embeddings within each USDA food group.
Process:
Takes a DataFrame with food embeddings and USDA food group labels
For each food group:
Extracts embeddings for foods in that group
Uses K-means clustering to find optimal subgroups (up to 32 clusters)
Assigns each food a new cluster label combining food group and cluster number
Uses silhouette scores to determine optimal number of clusters
Prints sample foods from each cluster to show the semantic groupings
The result is a more precise food categorization system that captures nuanced relationships between foods based on their semantic embeddings. For example, instead of just “Vegetables”, you might get clusters for “leafy greens”, “root vegetables”, etc.
We hope that these refined categories can then serve as better predictive features for nutrient content modeling.
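EmbeddingClusterer_0021 is a local helper module that is not reproduced in this notebook. The sketch below shows roughly the interface the code that follows relies on (find_optimal_clusters returning a (k, silhouette) pair and cluster_kmeans returning a dict with a 'labels' key); the silhouette-based KMeans search is an assumption of this sketch, not necessarily how the module is actually implemented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

class EmbeddingClusterer:
    """Hypothetical stand-in for EmbeddingClusterer_0021 with the interface used below."""
    def __init__(self, embeddings, random_state=42):
        self.embeddings = np.asarray(embeddings)
        self.random_state = random_state

    def find_optimal_clusters(self, max_clusters=32):
        # Try k = 2..max_clusters and keep the k with the best silhouette score.
        best_k, best_score = 2, -1.0
        for k in range(2, max_clusters + 1):
            labels = KMeans(n_clusters=k, n_init=10,
                            random_state=self.random_state).fit_predict(self.embeddings)
            score = silhouette_score(self.embeddings, labels)
            if score > best_score:
                best_k, best_score = k, score
        return best_k, best_score

    def cluster_kmeans(self, n_clusters):
        km = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=self.random_state).fit(self.embeddings)
        return {'labels': km.labels_, 'inertia': km.inertia_}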
from EmbeddingClusterer_0021 import EmbeddingClusterer
import numpy as np
import pandas as pd
def append_semantic_embedding_clusters(df):
"""
Perform semantic embedding clustering on each food group separately and save results to DataFrame column called cluster
Args:
df: DataFrame with 'food_group' and 'embedding' columns
Returns:
DataFrame with added cluster columns prefixed by food group
"""
result_df = df.copy()
CLUSTER_LIMIT = 32
# Get unique food groups
food_groups = df['food_group'].unique()
column_name = f"cluster"
# Initialize cluster column with -1 (no cluster)
result_df[column_name] = -1
for group in food_groups:
# Filter for current food group
group_mask = df['food_group'] == group
group_df = df[group_mask].copy()
if len(group_df) < 2:
continue
# Extract embeddings
embeddings = np.stack(group_df['embedding'].values)
# Initialize clusterer
clusterer = EmbeddingClusterer(embeddings)
# Find optimal clusters
optimal_clusters, silhouette = clusterer.find_optimal_clusters(max_clusters=min(CLUSTER_LIMIT, len(group_df)-1))
print(f"\n{group}:")
print(f"Number of items: {len(group_df)}")
print(f"Optimal clusters: {optimal_clusters}")
print(f"Silhouette score: {silhouette:.3f}")
# Perform clustering
kmeans_results = clusterer.cluster_kmeans(n_clusters=optimal_clusters)
# Update cluster assignments for current food group
# First convert the column to object dtype before assigning string labels
result_df[column_name] = result_df[column_name].astype('object')
# Assign the labels with a plain list so the assignment is positional; assigning a
# freshly-indexed pd.Series via .loc would align on index and misplace values
result_df.loc[group_mask, column_name] = [f"{group}_{label}" for label in kmeans_results['labels']]
# Print sample foods from each cluster
print(f"\nSample foods from each {group} cluster:")
group_df['cluster'] = kmeans_results['labels']
for cluster in range(optimal_clusters):
sample_foods = group_df[group_df['cluster'] == cluster]['food_name'].sample(
min(5, sum(kmeans_results['labels'] == cluster))
)
print(f"\nCluster {cluster}:")
print(sample_foods.values)
return result_df
print(f"before append_semantic_embedding_clusters() imputed_food_rows {imputed_food_rows.shape}")
imputed_food_rows = append_semantic_embedding_clusters(imputed_food_rows)
print(f"after append_semantic_embedding_clusters() imputed_food_rows {imputed_food_rows.shape}")
print(f"imputed_food_rows['cluster'].unique().shape {imputed_food_rows['cluster'].unique().shape}")
/root/fnana/fnana_venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
2024-12-09 04:59:14.681404: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720354.693658 2624069 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720354.697237 2624069 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 04:59:14.708954: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
before append_semantic_embedding_clusters() imputed_food_rows (7713, 30)
Dairy and Egg Products:
Number of items: 291
Optimal clusters: 6
Silhouette score: 0.194
Sample foods from each Dairy and Egg Products cluster:
Cluster 0:
['Milk, lowfat, fluid, 1% milkfat, with added vitamin A and vitamin D'
'Milk, canned, condensed, sweetened'
'Milk, chocolate, lowfat, reduced sugar'
'Milk, filled, fluid, with lauric acid oil'
'Milk, dry, nonfat, instant, with added vitamin A and vitamin D']
Cluster 1:
['Cheese, ricotta, whole milk' 'Cheese, goat, hard type'
'KRAFT CHEEZ WHIZ Pasteurized Process Cheese Sauce'
'Cheese, Swiss, nonfat or fat free' 'Cheese, mexican, queso anejo']
Cluster 2:
['Cream substitute, liquid, with lauric acid oil and sodium caseinate'
'Cream, whipped, cream topping, pressurized'
"KRAFT BREAKSTONE'S Reduced Fat Sour Cream"
'Cream, half and half, lowfat' 'Cream, sour, cultured']
Cluster 3:
['Yogurt, Greek, vanilla, lowfat' 'Yogurt, Greek, nonfat, peach, CHOBANI'
'Yogurt, Greek, 2% fat, key lime blend, CHOBANI'
'Yogurt, Greek, 2%fat, coconut blend, CHOBANI'
'Yogurt, frozen, flavors not chocolate, nonfat milk, with low-calorie sweetener']
Cluster 4:
["Egg, whole, raw, frozen, pasteurized (Includes foods for USDA's Food Distribution Program)"
'Egg, goose, whole, fresh, raw' 'Egg, whole, raw, fresh'
'Egg, whole, raw, frozen, salted, pasteurized'
'Egg, quail, whole, fresh, raw']
Cluster 5:
['Beverage, instant breakfast powder, chocolate, not reconstituted'
'Ice cream, light, soft serve, chocolate'
'Beverage, instant breakfast powder, chocolate, sugar-free, not reconstituted'
'Ice cream sandwich' 'Light ice cream, Creamsicle']
Spices and Herbs:
Number of items: 63
Optimal clusters: 2
Silhouette score: 0.218
Sample foods from each Spices and Herbs cluster:
Cluster 0:
['Mustard, prepared, yellow' 'Spices, sage, ground' 'Peppermint, fresh'
'Spices, dill seed' 'Spices, poppy seed']
Cluster 1:
['Vanilla extract' 'Vinegar, cider' 'Vinegar, balsamic'
'Vinegar, red wine' 'Vanilla extract, imitation, no alcohol']
Baby Foods:
Number of items: 345
Optimal clusters: 2
Silhouette score: 0.260
Sample foods from each Baby Foods cluster:
Cluster 0:
['Babyfood, juice, orange and apple and banana'
'Babyfood, banana juice with low fat yogurt'
'Babyfood, juice, apple and peach' 'Babyfood, meat, lamb, strained'
'Babyfood, cookie, baby, fruit']
Cluster 1:
['Infant formula, PBM PRODUCTS, store brand, soy, ready-to-feed'
'Infant formula, NESTLE, GOOD START SOY, with ARA and DHA, powder'
'Infant formula, PBM PRODUCTS, store brand, ready-to-feed'
'Infant formula, PBM PRODUCTS, store brand, powder'
'Infant formula, NESTLE, GOOD START SUPREME, with iron, DHA and ARA, prepared from liquid concentrate']
Fats and Oils:
Number of items: 207
Optimal clusters: 3
Silhouette score: 0.208
Sample foods from each Fats and Oils cluster:
Cluster 0:
['Oil, coconut' 'Oil, industrial, soy, low linolenic' 'Oil, almond'
'Oil, PAM cooking spray, original'
'Oil, industrial, palm kernel (hydrogenated), confection fat, uses similar to 95 degree hard butter']
Cluster 1:
['Margarine-like, vegetable oil spread, 60% fat, stick/tub/bottle, with salt'
'Lard'
'Margarine-like, butter-margarine blend, 80% fat, stick, without salt'
'Margarine-like, vegetable oil spread, 60% fat, stick/tub/bottle, without salt'
'Margarine-like spread with yogurt, 70% fat, stick, with salt']
Cluster 2:
['Salad dressing, mayonnaise, imitation, soybean without cholesterol'
'Salad dressing, mayonnaise, regular'
'Salad dressing, french dressing, commercial, regular'
'Salad dressing, KRAFT MIRACLE WHIP FREE Nonfat Dressing'
'Salad dressing, blue or roquefort cheese, low calorie']
Poultry Products:
Number of items: 383
Optimal clusters: 3
Silhouette score: 0.210
Sample foods from each Poultry Products cluster:
Cluster 0:
['Ruffed Grouse, breast meat, skinless, raw'
'Pheasant, breast, meat only, raw' 'Ostrich, fan, raw'
'Goose, domesticated, meat only, raw' 'Ostrich, inside leg, raw']
Cluster 1:
['Turkey, drumstick, from whole bird, meat only, with added solution, roasted'
'Turkey, whole, skin (light and dark), roasted'
'Turkey, ground, fat free, patties, broiled'
'Turkey, whole, light meat, raw'
'Turkey, drumstick, from whole bird, meat only, with added solution, raw']
Cluster 2:
['Chicken breast tenders, breaded, cooked, microwaved'
'Chicken, broilers or fryers, meat only, raw'
'Chicken, dark meat, drumstick, meat and skin, with added solution, cooked, braised'
'Chicken, broilers or fryers, light meat, meat only, cooked, roasted'
'Chicken, dark meat, thigh, meat and skin, with added solution, cooked, roasted']
Soups, Sauces, and Gravies:
Number of items: 252
Optimal clusters: 2
Silhouette score: 0.226
Sample foods from each Soups, Sauces, and Gravies cluster:
Cluster 0:
['Soup, pea, green, canned, prepared with equal volume milk'
'Soup, chili beef, canned, prepared with equal volume water'
'Soup, cream of mushroom, canned, prepared with equal volume low fat (2%) milk'
'Soup, cream of shrimp, canned, prepared with equal volume low fat (2%) milk'
'Soup, tomato, dry, mix, prepared with water']
Cluster 1:
['Sauce, barbecue, KC MASTERPIECE, original' 'Dip, bean, original flavor'
'Sauce, pesto, MEZZETTA, NAPA VALLEY BISTRO, basil pesto, ready-to-serve'
"Sauce, barbecue, BULL'S-EYE, original"
'Sauce, barbecue, OPEN PIT, original']
Sausages and Luncheon Meats:
Number of items: 166
Optimal clusters: 16
Silhouette score: 0.175
Sample foods from each Sausages and Luncheon Meats cluster:
Cluster 0:
['Sausage, summer, pork and beef, sticks, with cheddar cheese'
'Sausage, Vienna, canned, chicken, beef, pork'
'Sausage, Italian, pork, mild, cooked, pan-fried'
'Sausage, Italian, pork, mild, raw'
'Sausage, chicken, beef, pork, skinless, smoked']
Cluster 1:
['Bologna, beef and pork, low fat' 'Bologna, chicken, pork, beef'
'Bologna, pork and turkey, lite' 'Bologna, beef, low fat'
'Bologna, turkey']
Cluster 2:
['Ham, honey, smoked, cooked'
'Ham, turkey, sliced, extra lean, prepackaged or deli'
'Ham, chopped, canned' 'Ham, minced'
'Ham, smoked, extra lean, low sodium']
Cluster 3:
['Frankfurter, low sodium' 'Frankfurter, meat and poultry, low fat'
'Frankfurter, meat, heated' 'Frankfurter, chicken'
'Frankfurter, meat and poultry, unheated']
Cluster 4:
['Luncheon meat, pork and chicken, minced, canned, includes Spam Lite'
'Luncheon meat, pork with ham, minced, canned, includes Spam (Hormel)'
'Luncheon meat, pork, ham, and chicken, minced, canned, reduced sodium, added ascorbic acid, includes SPAM, 25% less sodium'
'Luncheon sausage, pork and beef' 'Luncheon meat, pork, canned']
Cluster 5:
['Bratwurst, pork, beef and turkey, lite, smoked'
'Knackwurst, knockwurst, pork, beef' 'Beerwurst, beer salami, pork'
'Bratwurst, veal, cooked' 'Beerwurst, beer salami, pork and beef']
Cluster 6:
['Meatballs, frozen, Italian style' 'Salami, dry or hard, pork'
'Salami, cooked, beef' 'Sandwich spread, pork, beef'
'Salami, Italian, pork']
Cluster 7:
['Sausage, pork and beef, fresh, cooked'
'Sausage, pork and turkey, pre-cooked' 'Sausage, turkey, fresh, cooked'
'Sausage, breakfast sausage, beef, pre-cooked, unprepared'
'Sausage, turkey and pork, fresh, bulk, patty or link, cooked']
Cluster 8:
['Liverwurst spread' 'Roast beef spread' 'Chicken spread'
'Ham salad spread' 'Ham and cheese spread']
Cluster 9:
['Chicken breast, oven-roasted, fat-free, sliced'
'Turkey breast, low salt, prepackaged or deli, luncheon meat'
'Roast beef, deli style, prepackaged, sliced'
'Turkey, white, rotisserie, deli cut'
'Turkey breast, sliced, prepackaged']
Cluster 10:
['Oscar Mayer, Wieners (beef franks)'
'Oscar Mayer, Ham (chopped with natural juice)'
'Oscar Mayer, Smokies Sausage Little Cheese (pork, turkey)'
'Oscar Mayer, Chicken Breast (honey glazed)' 'Oscar Mayer, Salami (hard)']
Cluster 11:
['Pate, chicken liver, canned' 'Pate, goose liver, smoked, canned'
'Pate, truffle flavor' 'Pate, liver, not specified, canned']
Cluster 12:
['Pickle and pimiento loaf, pork' 'Scrapple, pork'
'Peppered loaf, pork, beef' 'Olive loaf, pork' 'Picnic loaf, pork, beef']
Cluster 13:
['Beef, cured, corned beef, canned' 'Beef, cured, luncheon meat, jellied'
'Beef, chopped, cured, smoked' 'Beef, cured, dried'
'Beef, cured, pastrami']
Cluster 14:
['Pastrami, beef, 98% fat-free'
'Sausage, pork, turkey, and beef, reduced sodium'
'Hormel Pillow Pak Sliced Turkey Pepperoni' 'Bacon, turkey, low sodium'
'Bacon, turkey, unprepared']
Cluster 15:
['Pork sausage, link/patty, fully cooked, unheated'
'Pork sausage, link/patty, fully cooked, microwaved'
'Pork sausage rice links, brown and serve, cooked'
'Pork sausage, reduced sodium, cooked'
'Pork sausage, link/patty, cooked, pan-fried']
Breakfast Cereals:
Number of items: 195
Optimal clusters: 30
Silhouette score: 0.147
Sample foods from each Breakfast Cereals cluster:
Cluster 0:
['Cereals, QUAKER, Quick Oats with Iron, Dry'
"Cereals, QUAKER, Oat Bran, QUAKER/MOTHER'S Oat Bran, dry"
'Cereals, QUAKER, QUAKER MultiGrain Oatmeal, prepared with water, salt'
'Cereals, QUAKER, QUAKER MultiGrain Oatmeal, dry'
"Cereals, QUAKER, Oat Bran, QUAKER/MOTHER'S Oat Bran, prepared with water, no salt"]
Cluster 1:
['Cereals ready-to-eat, MALT-O-MEAL, HONEY GRAHAM SQUARES'
"Cereals ready-to-eat, MOM'S BEST, Honey Nut TOASTY O'S"
'Cereals ready-to-eat, MALT-O-MEAL, Blueberry MUFFIN TOPS Cereal'
'Cereals ready-to-eat, MALT-O-MEAL, Raisin Bran Cereal']
Cluster 2:
['Cereals ready-to-eat, POST, GREAT GRAINS, Raisin, Date & Pecan'
'Cereals ready-to-eat, POST GREAT GRAINS Banana Nut Crunch'
'Cereals ready-to-eat, POST, GOLDEN CRISP'
'Cereals ready-to-eat, POST Raisin Bran Cereal'
'Cereals ready-to-eat, POST, GRAPE-NUTS Flakes']
Cluster 3:
['Cereals, farina, enriched, assorted brands including CREAM OF WHEAT, quick (1-3 minutes), dry'
'Cereals, CREAM OF WHEAT, instant, dry'
'Cereals, farina, enriched, assorted brands including CREAM OF WHEAT, quick (1-3 minutes), cooked with water, without salt'
'Cereals, CREAM OF WHEAT, instant, prepared with water, without salt'
'Cereals, CREAM OF RICE, dry']
Cluster 4:
['Cereals, corn grits, white, regular and quick, enriched, dry'
'Cereals, corn grits, white, regular and quick, enriched, cooked with water, without salt'
'Cereals, corn grits, white, regular and quick, enriched, cooked with water, with salt'
'Cereals, corn grits, yellow, regular and quick, unenriched, dry'
'Cereals, corn grits, yellow, regular, quick, enriched, cooked with water, with salt']
Cluster 5:
['Cereals ready-to-eat, wheat, puffed, fortified'
'Cereals ready-to-eat, chocolate-flavored frosted puffed corn'
'Cereals ready-to-eat, FAMILIA' 'Cereals ready-to-eat, granola, homemade'
'Cereals ready-to-eat, wheat and bran, presweetened with nuts and fruits']
Cluster 6:
['Cereals ready-to-eat, QUAKER, Shredded Wheat, bagged cereal']
Cluster 7:
['Cereals ready-to-eat, QUAKER, Maple Brown Sugar LIFE Cereal'
'Cereals ready-to-eat, QUAKER, QUAKER Honey Graham LIFE Cereal'
'Cereals ready-to-eat, QUAKER, QUAKER OAT CINNAMON LIFE'
'Cereals ready-to-eat, QUAKER, QUAKER OAT LIFE, plain']
Cluster 8:
['Cereals, QUAKER, corn grits, instant, plain, dry'
'Cereals, QUAKER, hominy grits, white, regular, dry'
'Cereals, QUAKER, hominy grits, white, quick, dry']
Cluster 9:
['Cereals ready-to-eat, RALSTON Corn Flakes'
'Cereals ready-to-eat, RALSTON CRISP RICE'
'Cereals ready-to-eat, RALSTON TASTEEOS'
'Cereals ready-to-eat, RALSTON Crispy Hexagons'
'Cereals ready-to-eat, RALSTON Corn Biscuits']
Cluster 10:
['Cereals, oats, instant, fortified, maple and brown sugar, dry'
'Cereals, oats, instant, fortified, with raisins and spice, prepared with water'
'Cereals, oats, instant, fortified, with cinnamon and spice, prepared with water'
'Cereals, oats, instant, fortified, with cinnamon and spice, dry'
'Cereals, oats, instant, fortified, plain, dry']
Cluster 11:
['Cereals, QUAKER, oatmeal, REAL MEDLEYS, peach almond, dry'
'Cereals, QUAKER, oatmeal, REAL MEDLEYS, summer berry, dry'
'Cereals, QUAKER, oatmeal, REAL MEDLEYS, blueberry hazelnut, dry'
'Cereals, QUAKER, oatmeal, REAL MEDLEYS, cherry pistachio, dry'
'Cereals, QUAKER, oatmeal, REAL MEDLEYS, apple walnut, dry']
Cluster 12:
['Cereals ready-to-eat, POST, Shredded Wheat, original spoon-size'
'Cereals ready-to-eat, POST, Shredded Wheat, lightly frosted, spoon-size'
'Cereals ready-to-eat, POST, Honey Nut Shredded Wheat'
"Cereals ready-to-eat, POST, Shredded Wheat n' Bran, spoon-size"
'Cereals ready-to-eat, POST, Shredded Wheat, original big biscuit']
Cluster 13:
['Cereals ready-to-eat, POST, HONEY BUNCHES OF OATS, pecan bunches'
'Cereals ready-to-eat, POST, HONEY BUNCHES OF OATS, with almonds'
'Cereals ready-to-eat, POST, HONEY BUNCHES OF OATS with vanilla bunches'
'Cereals ready-to-eat, POST, HONEY BUNCHES OF OATS, honey roasted'
'Cereals ready-to-eat, POST HONEY BUNCHES OF OATS with cinnamon bunches']
Cluster 14:
["Cereals ready-to-eat, MOM'S BEST, Sweetened WHEAT-FULS"
'Cereals ready-to-eat, QUAKER, SWEET CRUNCH/QUISP']
Cluster 15:
['Cereals, CREAM OF WHEAT, 1 minute cook time, cooked with water, stove-top, without salt'
'Cereals, CREAM OF WHEAT, regular (10 minute), cooked with water, with salt'
'Cereals, CREAM OF WHEAT, regular (10 minute), cooked with water, without salt'
'Cereals, CREAM OF WHEAT, regular, 10 minute cooking, dry'
'Cereals, CREAM OF WHEAT, 1 minute cook time, dry']
Cluster 16:
['Cereals ready-to-eat, QUAKER, Oatmeal Squares, cinnamon'
"Cereals ready-to-eat, QUAKER, MOTHER'S Toasted Oat Bran cereal"
'Cereals ready-to-eat, QUAKER, QUAKER 100% Natural Granola with Oats, Wheat, Honey, and Raisins'
'Cereals ready-to-eat, QUAKER, Natural Granola Apple Cranberry Almond'
'Cereals ready-to-eat, QUAKER, Low Fat 100% Natural Granola with Raisins']
Cluster 17:
['Cereals, whole wheat hot natural cereal, cooked with water, with salt'
'Cereals, whole wheat hot natural cereal, dry'
'Cereals, whole wheat hot natural cereal, cooked with water, without salt']
Cluster 18:
["Cereals ready-to-eat, QUAKER, MOTHER'S COCOA BUMPERS"
"Cereals ready-to-eat, QUAKER, MOTHER'S PEANUT BUTTER BUMPERS Cereal"
"Cereals ready-to-eat, QUAKER, MOTHER'S GRAHAM BUMPERS"]
Cluster 19:
['Cereals, QUAKER, Instant Oatmeal, raisins, dates and walnuts, dry'
'Cereals, QUAKER, Instant Oatmeal, fruit and cream, variety of flavors, reduced sugar'
'Cereals, QUAKER, Instant Oatmeal, Banana Bread, dry'
'Cereals, QUAKER, Instant Oatmeal, DINOSAUR EGGS, Brown Sugar, dry'
'Cereals, QUAKER, Instant Oatmeal, fruit and cream variety, dry']
Cluster 20:
["Cereals, QUAKER, Instant Grits, Ham 'n' Cheese flavor, dry"
'Cereals, QUAKER, Instant Grits, Butter flavor, dry'
'Cereals, QUAKER, Instant Grits, Redeye Gravy & Country Ham flavor, dry'
'Cereals, QUAKER, corn grits, instant, cheddar cheese flavor, dry'
'Cereals, QUAKER, Instant Grits, Country Bacon flavor, dry']
Cluster 21:
['Cereals ready-to-eat, MALT-O-MEAL, Fruity DYNO-BITES'
'Cereals ready-to-eat, MALT-O-MEAL, Cocoa DYNO-BITES']
Cluster 22:
['Cereals ready-to-eat, MALT-O-MEAL, COCO-ROOS'
'Cereals ready-to-eat, MALT-O-MEAL, BERRY COLOSSAL CRUNCH'
'Cereals ready-to-eat, MALT-O-MEAL, CHOCOLATE MARSHMALLOW MATEYS'
'Cereals ready-to-eat, MALT-O-MEAL, GOLDEN PUFFS'
'Cereals, ready-to-eat, MALT-O-MEAL, Blueberry Mini SPOONERS']
Cluster 23:
["Cereals ready-to-eat, NATURE'S PATH, Organic FLAX PLUS, Pumpkin Granola"
'Cereals ready-to-eat, SUN COUNTRY, KRETSCHMER Toasted Wheat Bran'
"Cereals ready-to-eat, NATURE'S PATH, Organic FLAX PLUS flakes"
'Cereals ready-to-eat, WEETABIX whole grain cereal'
'Cereals ready-to-eat, RALSTON Enriched Wheat Bran flakes']
Cluster 24:
['Cereals, MALT-O-MEAL, Farina Hot Wheat Cereal, dry'
'Cereals, MALT-O-MEAL, chocolate, dry'
'Cereals, MALT-O-MEAL, original, plain, dry'
'Cereals, MALT-O-MEAL, chocolate, prepared with water, without salt'
'Cereals, MALT-O-MEAL, original, plain, prepared with water, without salt']
Cluster 25:
['Cereals, oats, regular and quick and instant, unenriched, cooked with water (includes boiling and microwaving), with salt'
'Cereals, oats, regular and quick, unenriched, cooked with water (includes boiling and microwaving), without salt'
'Cereals, QUAKER, corn grits, instant, plain, prepared (microwaved or boiling water added), without salt'
'Cereals, oats, instant, fortified, plain, prepared with water (boiling water added or microwaved)']
Cluster 26:
["Cereals ready-to-eat, QUAKER, CAP'N CRUNCH'S PEANUT BUTTER CRUNCH"
'Cereals ready-to-eat, QUAKER, HONEY GRAHAM OH!S'
'Cereals ready-to-eat, QUAKER, KING VITAMAN'
"Cereals ready-to-eat, QUAKER, CAP'N CRUNCH'S Halloween Crunch"
'Cereals ready-to-eat, QUAKER, Toasted Multigrain Crisps']
Cluster 27:
['Cereals, WHEATENA, dry'
'Cereals, WHEATENA, cooked with water, with salt'
'Cereals, WHEATENA, cooked with water']
Cluster 28:
['Millet, puffed']
Cluster 29:
['Cereals, farina, enriched, cooked with water, with salt'
'Cereals, farina, unenriched, dry']
Snacks:
Number of items: 172
Optimal clusters: 32
Silhouette score: 0.166
Sample foods from each Snacks cluster:
Cluster 0:
['Snacks, potato sticks' 'Snacks, potato chips, cheese-flavor'
'Snacks, popcorn, cheese-flavor'
'Snacks, potato chips, sour-cream-and-onion-flavor'
'Snacks, potato chips, made from dried potatoes, cheese-flavor']
Cluster 1:
['Snacks, granola bar, KASHI GOLEAN, chewy, mixed flavors'
'Snacks, granola bites, mixed flavors' 'Snacks, CLIF BAR, mixed flavors'
'Snacks, granola bar, KASHI GOLEAN, crunchy, mixed flavors'
'Snacks, granola bar, KASHI TLC Bar, crunchy, mixed flavors']
Cluster 2:
['Snacks, shrimp cracker' 'Rice and Wheat cereal bar'
'Snacks, crisped rice bar, chocolate chip' 'Snacks, taro chips'
'Snacks, crisped rice bar, almond']
Cluster 3:
['Snacks, KRAFT, CORNNUTS, plain' 'Tortilla chips, yellow, plain, salted'
'Snacks, yucca (cassava) chips, salted'
'Snacks, tortilla chips, unsalted, white corn'
'Snacks, tortilla chips, plain, white corn, salted']
Cluster 4:
['Snacks, rice cakes, brown rice, buckwheat'
'Snacks, rice cakes, brown rice, plain, unsalted'
'Snacks, brown rice chips' 'Snacks, rice cakes, brown rice, sesame seed'
'Snacks, rice cakes, brown rice, sesame seed, unsalted']
Cluster 5:
['Snacks, popcorn, air-popped (Unsalted)'
'Snacks, popcorn, oil-popped, microwave, regular flavor, no trans fat'
'Snacks, popcorn, air-popped'
'Popcorn, microwave, regular (butter) flavor, made with palm oil'
'Snacks, popcorn, home-prepared, oil-popped, unsalted']
Cluster 6:
['Snacks, Pretzels, gluten- free made with cornstarch and potato flour'
'Snacks, pretzels, hard, whole-wheat including both salted and unsalted'
'Snacks, pretzels, hard, plain, made with unenriched flour, salted'
"Snacks, pretzels, hard, confectioner's coating, chocolate-flavor"
'Snacks, pretzels, hard, plain, made with enriched flour, unsalted']
Cluster 7:
['Snacks, granola bars, hard, plain' 'Snacks, granola bars, hard, almond'
'Snacks, granola bars, hard, chocolate chip'
'Snacks, granola bars, hard, peanut butter']
Cluster 8:
['Snacks, potato chips, plain, made with partially hydrogenated soybean oil, salted'
'Snacks, potato chips, plain, made with partially hydrogenated soybean oil, unsalted']
Cluster 9:
['Snacks, corn-based, extruded, chips, unsalted'
'Snacks, corn-based, extruded, onion-flavor'
'Snacks, corn-based, extruded, cones, plain'
'Snacks, corn-based, extruded, chips, plain'
'Snacks, corn-based, extruded, puffs or twists, cheese-flavor, unenriched']
Cluster 10:
['Formulated bar, MARS SNACKFOOD US, SNICKERS MARATHON Honey Nut Oat Bar'
'Formulated bar, ZONE PERFECT CLASSIC CRUNCH BAR, mixed flavors'
'Formulated bar, POWER BAR, chocolate'
'Formulated bar, MARS SNACKFOOD US, SNICKERS MARATHON Protein Performance Bar, Caramel Nut Rush'
'Formulated bar, MARS SNACKFOOD US, SNICKERS MARATHON Energy Bar, all flavors']
Cluster 11:
['Breakfast bars, oats, sugar, raisins, coconut (include granola bar)'
'Snacks, granola bar, fruit-filled, nonfat'
'Snacks, granola bar, with coconut, chocolate coated']
Cluster 12:
['Snacks, granola bars, soft, uncoated, peanut butter and chocolate chip'
'Snacks, granola bars, soft, uncoated, chocolate chip'
'Snacks, granola bars, soft, uncoated, plain'
'Snacks, granola bars, soft, coated, milk chocolate coating, peanut butter'
'Snacks, granola bars, soft, coated, milk chocolate coating, chocolate chip']
Cluster 13:
['Snacks, candy bits, yogurt covered with vitamin C'
'Snacks, candy rolls, yogurt-covered, fruit flavored with high vitamin C'
'Snacks, fruit leather, pieces, with vitamin C']
Cluster 14:
['Snacks, potato chips, made from dried potatoes, reduced fat'
'Snacks, potato chips, reduced fat'
'Snacks, vegetable chips, made from garden vegetables'
'Potato chips, without salt, reduced fat'
'Snacks, potato chips, fat-free, made with olestra']
Cluster 15:
['Snacks, trail mix, tropical'
'Snacks, trail mix, regular, with chocolate chips, salted nuts and seeds'
'Snacks, trail mix, regular, with chocolate chips, unsalted nuts and seeds'
'Snacks, trail mix, regular' 'Snacks, trail mix, regular, unsalted']
Cluster 16:
['Snacks, plantain chips, salted' 'Snacks, sweet potato chips, unsalted'
'Snacks, bagel chips, plain' 'Snacks, potato chips, lightly salted'
'Snacks, potato chips, plain, salted']
Cluster 17:
['Snacks, FRITOLAY, SUNCHIPS, Multigrain Snack, Harvest Cheddar flavor'
'Snacks, FRITOLAY, SUNCHIPS, multigrain, French onion flavor'
'Snacks, FRITOLAY, SUNCHIPS, Multigrain Snack, original flavor'
'Snacks, vegetable chips, HAIN CELESTIAL GROUP, TERRA CHIPS']
Cluster 18:
['Snacks, popcorn, cakes' 'Snacks, popcorn, caramel-coated, with peanuts'
'Snacks, popcorn, caramel-coated, without peanuts']
Cluster 19:
['Snacks, granola bars, QUAKER OATMEAL TO GO, all flavors'
'Snacks, granola bar, QUAKER, chewy, 90 Calorie Bar']
Cluster 20:
['Popcorn, sugar syrup/caramel, fat-free'
'Snacks, popcorn, microwave, low fat'
'Cheese puffs and twists, corn based, baked, low fat'
'Snacks, popcorn, microwave, regular (butter) flavor, made with partially hydrogenated oil'
'Popcorn, microwave, low fat and sodium']
Cluster 21:
['Rice cake, cracker (include hain mini rice cakes)' 'Rice crackers']
Cluster 22:
["Snacks, KELLOGG, KELLOGG'S RICE KRISPIES TREATS Squares"
"Snacks, KELLOGG, KELLOGG'S Low Fat Granola Bar, Crunchy Almond/Brown Sugar"
'Snacks, NUTRI-GRAIN FRUIT AND NUT BAR'
"Snacks, KELLOGG, KELLOGG'S, NUTRI-GRAIN Cereal Bars, fruit"
'Milk and cereal bar']
Cluster 23:
['Snacks, granola bar, GENERAL MILLS, NATURE VALLEY, CHEWY TRAIL MIX'
'Snacks, granola bar, QUAKER, DIPPS, all flavors'
'Snacks, granola bar, GENERAL MILLS NATURE VALLEY, SWEET&SALTY NUT, peanut'
'Snacks, granola bar, GENERAL MILLS, NATURE VALLEY, with yogurt coating']
Cluster 24:
['Snack, BALANCE, original bar' 'Snack, Mixed Berry Bar']
Cluster 25:
['Snacks, sesame sticks, wheat-based, unsalted'
'Snacks, oriental mix, rice-based'
'Snacks, peas, roasted, wasabi-flavored'
'Snacks, sesame sticks, wheat-based, salted']
Cluster 26:
['Snacks, M&M MARS, COMBOS Snacks Cheddar Cheese Pretzel'
'Snacks, M&M MARS, KUDOS Whole Grain Bars, peanut butter'
'Snacks, M&M MARS, KUDOS Whole Grain Bar, chocolate chip'
"Snacks, M&M MARS, KUDOS Whole Grain Bar, M&M's milk chocolate"]
Cluster 27:
['Snacks, tortilla chips, ranch-flavor'
'Snacks, tortilla chips, taco-flavor'
'Snacks, tortilla chips, nacho cheese'
'Snacks, tortilla chips, nacho-flavor, made with enriched masa flour']
Cluster 28:
['Snacks, tortilla chips, nacho-flavor, reduced fat'
'Snacks, tortilla chips, low fat, unsalted'
'Tortilla chips, low fat, baked without fat'
'Snacks, tortilla chips, light (baked with less oil)'
'Snacks, tortilla chips, low fat, made with olestra, nacho cheese']
Cluster 29:
['Snacks, pork skins, plain' 'Snacks, pork skins, barbecue-flavor'
'Snacks, beef jerky, chopped and formed' 'Snacks, beef sticks, smoked']
Cluster 30:
['Pretzels, soft' 'Pretzels, soft, unsalted']
Cluster 31:
['Breakfast bar, corn flake crust with fruit']
Fruits and Fruit Juices:
Number of items: 355
Optimal clusters: 31
Silhouette score: 0.129
Sample foods from each Fruits and Fruit Juices cluster:
Cluster 0:
['Limes, raw' 'Orange peel, raw' 'Lemons, raw, without peel'
'Lemon juice, raw' 'Lime juice, raw']
Cluster 1:
['Pineapple, canned, heavy syrup pack, solids and liquids'
'Grapefruit, sections, canned, water pack, solids and liquids'
'Pineapple, canned, juice pack, solids and liquids'
'Grapefruit, sections, canned, juice pack, solids and liquids'
'Jackfruit, canned, syrup pack']
Cluster 2:
['Java-plum, (jambolan), raw' 'Persimmons, native, raw' 'Pummelo, raw'
'Carissa, (natal-plum), raw' 'Apricots, raw']
Cluster 3:
['Cranberry sauce, canned, sweetened'
'Cranberry sauce, whole, canned, OCEAN SPRAY'
'Cranberry sauce, jellied, canned, OCEAN SPRAY'
'Cranberry-orange relish, canned'
'Cranberry juice blend, 100% juice, bottled, with added vitamin C and calcium']
Cluster 4:
['Plums, canned, purple, heavy syrup pack, solids and liquids'
'Plums, canned, purple, juice pack, solids and liquids'
'Figs, canned, heavy syrup pack, solids and liquids'
'Peaches, canned, water pack, solids and liquids'
'Figs, canned, light syrup pack, solids and liquids']
Cluster 5:
['Cherry juice, tart' 'Orange juice, canned, unsweetened'
'Lemon juice from concentrate, bottled, REAL LEMON' 'Prune juice, canned'
'Lime juice, canned or bottled, unsweetened']
Cluster 6:
['Pineapple juice, canned or bottled, unsweetened, without added ascorbic acid'
'Pineapple juice, canned or bottled, unsweetened, with added ascorbic acid'
'Apple juice, canned or bottled, unsweetened, with added ascorbic acid, calcium, and potassium'
'Grape juice, canned or bottled, unsweetened, with added ascorbic acid and calcium'
'Grape juice, canned or bottled, unsweetened, without added ascorbic acid']
Cluster 7:
['Cherries, sweet, canned, light syrup pack, solids and liquids'
'Cherries, sweet, canned, juice pack, solids and liquids'
'Cherries, sweet, canned, water pack, solids and liquids'
'Cherries, sour, red, canned, light syrup pack, solids and liquids'
'Cherries, sour, red, canned, water pack, solids and liquids']
Cluster 8:
["Raisins, dark, seedless (Includes foods for USDA's Food Distribution Program)"
"Cherries, tart, dried, sweetened (Includes foods for USDA's Food Distribution Program)"
"Blueberries, frozen, unsweetened (Includes foods for USDA's Food Distribution Program)"
"Strawberries, frozen, unsweetened (Includes foods for USDA's Food Distribution Program)"
"Cranberries, dried, sweetened (Includes foods for USDA's Food Distribution Program)"]
Cluster 9:
['Apples, raw, without skin, cooked, microwave'
'Apples, canned, sweetened, sliced, drained, heated'
'Applesauce, canned, sweetened, without salt' 'Candied fruit'
'Apples, dried, sulfured, uncooked']
Cluster 10:
['Plantains, green, fried' 'Plantains, yellow, raw' 'Sapodilla, raw'
'Bananas, raw' 'Breadfruit, raw']
Cluster 11:
['Fruit juice smoothie, BOLTHOUSE FARMS, BERRY BOOST'
'Fruit juice smoothie, BOLTHOUSE FARMS, strawberry banana'
'Fruit juice smoothie, NAKED JUICE, BLUE MACHINE'
'Fruit juice smoothie, NAKED JUICE, MIGHTY MANGO'
'Fruit juice smoothie, NAKED JUICE, strawberry banana']
Cluster 12:
['Rowal, raw' 'Melons, honeydew, raw' 'Rose-apples, raw'
'Gooseberries, raw' 'Roselle, raw']
Cluster 13:
['Grapefruit juice, white, bottled, unsweetened, OCEAN SPRAY'
'Grapefruit juice, white, frozen concentrate, unsweetened, undiluted'
'Grapefruit juice, white, canned or bottled, unsweetened'
'Grapefruit juice, white, frozen concentrate, unsweetened, diluted with 3 volume water'
'Grapefruit juice, white, canned, sweetened']
Cluster 14:
['Grapefruit, raw, pink and red, California and Arizona'
'Passion-fruit juice, purple, raw' 'Passion-fruit juice, yellow, raw'
'Ruby Red grapefruit juice blend (grapefruit, grape, apple), OCEAN SPRAY, bottled, with added vitamin C'
'Grapefruit juice, pink, raw']
Cluster 15:
['Longans, dried' 'Figs, dried, uncooked' 'Persimmons, japanese, dried'
'Mango, dried, sweetened' 'Figs, dried, stewed']
Cluster 16:
['Peach nectar, canned, with sucralose, without added ascorbic acid'
'Pear nectar, canned, without added ascorbic acid'
'Papaya nectar, canned' 'Guanabana nectar, canned'
'Apricot nectar, canned, with added ascorbic acid']
Cluster 17:
['Apricots, dried, sulfured, uncooked' 'Apricots, frozen, sweetened'
'Guava sauce, cooked'
'Apricots, dehydrated (low-moisture), sulfured, uncooked'
'Apricots, dehydrated (low-moisture), sulfured, stewed']
Cluster 18:
['Oranges, raw, California, valencias'
'Oranges, raw, all commercial varieties'
'Tangerines, (mandarin oranges), raw' 'Durian, raw or frozen'
'Melons, casaba, raw']
Cluster 19:
['Pineapple juice, frozen concentrate, unsweetened, undiluted'
'Raspberry juice concentrate'
'Apple juice, frozen concentrate, unsweetened, undiluted, without added ascorbic acid'
'Pineapple juice, frozen concentrate, unsweetened, diluted with 3 volume water'
'Apple juice, frozen concentrate, unsweetened, undiluted, with added ascorbic acid']
Cluster 20:
["Apples, raw, red delicious, with skin (Includes foods for USDA's Food Distribution Program)"
"Apples, raw, with skin (Includes foods for USDA's Food Distribution Program)"
"Apples, frozen, unsweetened, unheated (Includes foods for USDA's Food Distribution Program)"
"Orange juice, raw (Includes foods for USDA's Food Distribution Program)"
"Apples, raw, gala, with skin (Includes foods for USDA's Food Distribution Program)"]
Cluster 21:
['Orange juice, chilled, includes from concentrate, with added calcium'
'Orange juice, frozen concentrate, unsweetened, undiluted, with added calcium'
'Orange juice, frozen concentrate, unsweetened, diluted with 3 volume water, with added calcium'
'Orange juice, chilled, includes from concentrate, with added calcium and vitamin D'
'Orange juice, chilled, includes from concentrate']
Cluster 22:
['Cherries, sweet, canned, pitted, heavy syrup, drained'
'Apricots, canned, heavy syrup, drained'
'Blueberries, wild, canned, heavy syrup, drained'
'Papaya, canned, heavy syrup, drained'
'Blackberries, canned, heavy syrup, solids and liquids']
Cluster 23:
['Grapes, red or green (European type, such as Thompson seedless), raw'
'Currants, red and white, raw' 'Raisins, golden, seedless'
'Raisins, seeded' 'Cherries, sour, red, raw']
Cluster 24:
['Pears, asian, raw' 'Pears, dried, sulfured, stewed, without added sugar'
'Prickly pears, raw'
"Pears, raw, bartlett (Includes foods for USDA's Food Distribution Program)"
'Pears, raw, red anjou']
Cluster 25:
['Olives, ripe, canned (jumbo-super colossal)'
'Olives, ripe, canned (small-extra large)'
'Olives, pickled, canned or bottled, green']
Cluster 26:
['Peaches, dried, sulfured, uncooked'
'Peaches, dehydrated (low-moisture), sulfured, stewed'
'Peaches, dried, sulfured, stewed, without added sugar'
'Peaches, yellow, raw'
'Peaches, dried, sulfured, stewed, with added sugar']
Cluster 27:
['Fruit cocktail, (peach and pineapple and pear and grape and cherry), canned, light syrup, solids and liquids'
'Fruit cocktail, (peach and pineapple and pear and grape and cherry), canned, extra heavy syrup, solids and liquids'
'Fruit salad, (peach and pear and apricot and pineapple and cherry), canned, water pack, solids and liquids'
'Fruit salad, (peach and pear and apricot and pineapple and cherry), canned, extra heavy syrup, solids and liquids'
'Fruit cocktail, (peach and pineapple and pear and grape and cherry), canned, water pack, solids and liquids']
Cluster 28:
['Melon balls, frozen' 'Rhubarb, frozen, cooked, with sugar'
'Rhubarb, frozen, uncooked' 'Strawberries, frozen, sweetened, sliced'
'Boysenberries, frozen, unsweetened']
Cluster 29:
['Apricots, canned, heavy syrup pack, without skin, solids and liquids'
'Apricots, canned, light syrup pack, with skin, solids and liquids'
'Apricots, canned, water pack, without skin, solids and liquids'
'Apricots, canned, extra heavy syrup pack, without skin, solids and liquids'
'Apricots, canned, juice pack, with skin, solids and liquids']
Cluster 30:
['Plums, dried (prunes), uncooked' 'Prune puree' 'Plums, raw'
'Plums, dried (prunes), stewed, with added sugar'
'Plums, dried (prunes), stewed, without added sugar']
Pork Products:
Number of items: 336
Optimal clusters: 20
Silhouette score: 0.183
Sample foods from each Pork Products cluster:
Cluster 0:
['Pork, cured, ham, boneless, regular (approximately 11% fat), roasted'
'Pork, cured, ham, regular (approximately 13% fat), canned, roasted'
'Pork, cured, ham, steak, boneless, extra lean, unheated'
'Pork, cured, ham, boneless, low sodium, extra lean (approximately 5% fat), roasted'
'Pork, cured, ham, boneless, extra lean and regular, unheated']
Cluster 1:
['Pork, fresh, loin, center rib (chops or roasts), boneless, separable lean only, raw'
'Pork, fresh, loin, sirloin (chops or roasts), bone-in, separable lean and fat, raw'
'Pork loin, fresh, backribs, bone-in, raw, lean only'
'Pork, fresh, enhanced, loin, tenderloin, separable lean only, raw'
'Pork, fresh, loin, top loin (chops), boneless, separable lean only, raw']
Cluster 2:
['Pork, cured, ham with natural juices, rump, bone-in, separable lean only, heated, roasted'
'Pork, cured, ham -- water added, rump, bone-in, separable lean only, unheated'
'Pork, cured, ham, rump, bone-in, separable lean and fat, heated, roasted'
'Pork, cured, ham with natural juices, rump, bone-in, separable lean and fat, heated, roasted'
'Pork, cured, ham -- water added, rump, bone-in, separable lean only, heated, roasted']
Cluster 3:
['Pork, cured, ham, whole, separable lean and fat, unheated'
'Pork, cured, ham with natural juices, spiral slice, boneless, separable lean and fat, heated, roasted'
'Pork, cured, ham -- water added, whole, boneless, separable lean and fat, heated, roasted'
'Pork, cured, ham, whole, separable lean only, unheated'
'Pork, cured, ham and water product, whole, boneless, separable lean only, unheated']
Cluster 4:
['Pork, fresh, variety meats and by-products, feet, raw'
'Pork, fresh, variety meats and by-products, mechanically separated, raw'
'Pork, fresh, variety meats and by-products, stomach, raw'
'Pork, fresh, variety meats and by-products, pancreas, raw'
'Pork, fresh, variety meats and by-products, lungs, raw']
Cluster 5:
['Pork, fresh, loin, whole, separable lean only, cooked, braised'
'Pork, fresh, loin, whole, separable lean and fat, cooked, braised'
'Pork, fresh, loin, top loin (chops), boneless, separable lean only, cooked, braised'
'Pork, fresh, loin, center rib (chops), bone-in, separable lean and fat, cooked, braised'
'Pork, fresh, loin, blade (chops), bone-in, separable lean and fat, cooked, braised']
Cluster 6:
['Pork, fresh, loin, whole, separable lean and fat, cooked, broiled'
'Pork, fresh, loin, top loin (chops), boneless, separable lean only, with added solution, cooked, pan-broiled'
'Pork, loin, leg cap steak, boneless, separable lean and fat, cooked, broiled'
'Pork, fresh, loin, center loin (chops), boneless, separable lean only, cooked, pan-broiled'
'Pork, fresh, loin, blade (chops), boneless, separable lean only, boneless, cooked, broiled']
Cluster 7:
['Pork, fresh, loin, whole, separable lean only, cooked, roasted'
'Pork, fresh, loin, country-style ribs, separable lean only, boneless, cooked, roasted'
'Pork, fresh, shoulder, whole, separable lean only, cooked, roasted'
'Pork, fresh, loin, blade (roasts), boneless, separable lean and fat, cooked, roasted'
'Pork, fresh, loin, sirloin (roasts), boneless, separable lean and fat, cooked, roasted']
Cluster 8:
['Pork, fresh, leg (ham), whole, separable lean only, cooked, roasted'
'Pork, fresh, leg (ham), shank half, separable lean and fat, raw'
'Pork, fresh, leg (ham), rump half, separable lean only, cooked, roasted'
'Pork, fresh, leg (ham), whole, separable lean only, raw'
'Pork, fresh, spareribs, separable lean and fat, raw']
Cluster 9:
['Canadian bacon, cooked, pan-fried'
'Bacon, pre-sliced, reduced/low sodium, unprepared'
'Canadian bacon, unprepared']
Cluster 10:
['Pork, cured, bacon, pre-sliced, cooked, pan-fried'
'Pork, cured, feet, pickled' 'Pork, oriental style, dehydrated'
'Pork, cured, bacon, cooked, baked'
'Pork, cured, bacon, cooked, microwaved']
Cluster 11:
['Pork, ground, 72% lean / 28% fat, cooked, crumbles'
'Pork, ground, 84% lean / 16% fat, cooked, pan-broiled'
'Pork, ground, 72% lean / 28% fat, cooked, pan-broiled'
'Pork, ground, 96% lean / 4% fat, cooked, crumbles'
'Pork, ground, 96% lean / 4% fat, cooked, pan-broiled']
Cluster 12:
['HORMEL ALWAYS TENDER, Pork Tenderloin, Teriyaki-Flavored'
'HORMEL ALWAYS TENDER, Center Cut Chops, Fresh Pork'
'HORMEL ALWAYS TENDER, Pork Loin Filets, Lemon Garlic-Flavored'
'HORMEL ALWAYS TENDER, Boneless Pork Loin, Fresh Pork'
'HORMEL ALWAYS TENDER, Pork Tenderloin, Peppercorn-Flavored']
Cluster 13:
['Pork, cured, ham with natural juices, slice, boneless, separable lean and fat, heated, pan-broil'
'Pork, cured, ham and water product, slice, bone-in, separable lean only, heated, pan-broil'
'Pork, cured, ham and water product, slice, boneless, separable lean and fat, heated, pan-broil'
'Pork, cured, ham -- water added, slice, boneless, separable lean and fat, heated, pan-broil'
'Pork, cured, ham and water product, slice, bone-in, separable lean and fat, heated, pan-broil']
Cluster 14:
['HORMEL, Cure 81 Ham' 'HORMEL Canadian Style Bacon']
Cluster 15:
['Pork, fresh, shoulder, (Boston butt), blade (steaks), separable lean and fat, with added solution, cooked, braised'
'Pork, fresh, shoulder, blade, boston (steaks), separable lean only, cooked, broiled'
'Pork, fresh, shoulder, (Boston butt), blade (steaks), separable lean only, with added solution, raw'
'Pork, fresh, shoulder, blade, boston (roasts), separable lean only, cooked, roasted'
'Pork, fresh, shoulder, (Boston butt), blade (steaks), separable lean and fat, raw']
Cluster 16:
['Pork, fresh, variety meats and by-products, stomach, cooked, simmered'
'Pork, fresh, variety meats and by-products, spleen, cooked, braised'
'Pork, fresh, variety meats and by-products, kidneys, cooked, braised'
'Pork, fresh, variety meats and by-products, lungs, cooked, braised'
'Pork, fresh, variety meats and by-products, liver, cooked, braised']
Cluster 17:
['Pork, fresh, composite of trimmed leg, loin, shoulder, and spareribs, (includes cuts to be cured), separable lean and fat, raw'
'Pork, fresh, composite of separable fat, with added solution, raw'
'Pork, fresh, composite of trimmed retail cuts (loin and shoulder blade), separable lean and fat, raw'
'Pork, fresh, composite of trimmed retail cuts (leg, loin, shoulder, and spareribs), separable lean and fat, raw'
'Pork, fresh, separable fat, cooked']
Cluster 18:
['Pork, fresh, shoulder, arm picnic, separable lean and fat, cooked, roasted'
'Pork, cured, shoulder, arm picnic, separable lean and fat, roasted'
'Pork, cured, shoulder, arm picnic, separable lean only, roasted'
'Pork, fresh, shoulder, arm picnic, separable lean and fat, raw'
'Pork, fresh, shoulder, arm picnic, separable lean only, cooked, braised']
Cluster 19:
['Pork, cured, ham, shank, bone-in, separable lean and fat, unheated'
'Pork, cured, ham -- water added, shank, bone-in, separable lean only, unheated'
'Pork, cured, ham with natural juices, shank, bone-in, separable lean only, heated, roasted'
'Pork, cured, ham and water product, shank, bone-in, separable lean only, heated, roasted'
'Pork, cured, ham -- water added, shank, bone-in, separable lean and fat, unheated']
Vegetables and Vegetable Products:
Number of items: 814
Optimal clusters: 24
Silhouette score: 0.136
Sample foods from each Vegetables and Vegetable Products cluster:
Cluster 0:
['Potatoes, french fried, all types, salt added in processing, frozen, unprepared'
'Onion rings, breaded, par fried, frozen, unprepared'
'Potatoes, french fried, steak cut, salt not added in processing, frozen, unprepared'
'Potatoes, frozen, french fried, par fried, extruded, prepared, heated in oven, without salt'
'Potatoes, french fried, steak fries, salt added in processing, frozen, oven-heated']
Cluster 1:
['Corn, sweet, yellow, frozen, kernels, cut off cob, boiled, drained, with salt'
'Corn, sweet, yellow, raw'
'Corn, sweet, yellow, cooked, boiled, drained, with salt'
'Corn, sweet, white, canned, vacuum pack, regular pack'
'Corn, sweet, yellow, cooked, boiled, drained, without salt']
Cluster 2:
['Borage, raw' 'Bamboo shoots, raw' 'Mountain yam, hawaii, raw'
'Waterchestnuts, chinese, (matai), raw' 'Kanpyo, (dried gourd strips)']
Cluster 3:
['Chrysanthemum, garland, cooked, boiled, drained, with salt'
'Pumpkin leaves, cooked, boiled, drained, with salt'
'Balsam-pear (bitter gourd), pods, cooked, boiled, drained, without salt'
'Pumpkin leaves, cooked, boiled, drained, without salt'
'Pumpkin flowers, cooked, boiled, drained, without salt']
Cluster 4:
['Cowpeas (blackeyes), immature seeds, raw' 'Radishes, oriental, dried'
'Onions, welsh, raw' 'Soybeans, green, raw' "Jew's ear, (pepeao), raw"]
Cluster 5:
['Squash, winter, spaghetti, cooked, boiled, drained, or baked, without salt'
'Squash, winter, hubbard, baked, without salt'
'Squash, summer, scallop, cooked, boiled, drained, without salt'
'Squash, summer, zucchini, includes skin, frozen, cooked, boiled, drained, without salt'
'Squash, winter, hubbard, cooked, boiled, mashed, with salt']
Cluster 6:
['Onions, frozen, chopped, unprepared'
'Broccoli, frozen, chopped, unprepared' 'Edamame, frozen, unprepared'
'Lima beans, immature seeds, frozen, baby, unprepared'
'Vegetables, mixed, frozen, unprepared']
Cluster 7:
['Pimento, canned' 'Sweet potato, canned, vacuum pack' 'Butterbur, canned'
'Sweet potato, canned, mashed' 'Mushrooms, straw, canned, drained solids']
Cluster 8:
['Asparagus, canned, regular pack, solids and liquids'
'Beets, canned, no salt added, solids and liquids'
'Peas, green, canned, seasoned, solids and liquids'
'Peas and carrots, canned, regular pack, solids and liquids'
'Beets, pickled, canned, solids and liquids']
Cluster 9:
['Soybeans, green, cooked, boiled, drained, with salt'
'Lima beans, immature seeds, frozen, fordhook, cooked, boiled, drained, with salt'
'Mung beans, mature seeds, sprouted, cooked, boiled, drained, without salt'
'Hyacinth-beans, immature seeds, cooked, boiled, drained, without salt'
'Broadbeans, immature seeds, cooked, boiled, drained, without salt']
Cluster 10:
['Peppers, sweet, red, raw' 'Peppers, chili, green, canned'
'Peppers, pasilla, dried'
'Peppers, sweet, red, frozen, chopped, unprepared'
'Peppers, sweet, red, frozen, chopped, boiled, drained, with salt']
Cluster 11:
['Potatoes, au gratin, home-prepared from recipe using margarine'
'Potatoes, mashed, dehydrated, prepared from flakes without milk, whole milk and butter added'
'Potatoes, au gratin, home-prepared from recipe using butter'
'Potato pancakes'
'Potatoes, mashed, dehydrated, flakes without milk, dry form']
Cluster 12:
['Cabbage, red, cooked, boiled, drained, with salt'
'Beets, cooked, boiled, drained'
'Cabbage, chinese (pak-choi), cooked, boiled, drained, with salt'
'Beets, cooked, boiled. drained, with salt'
'Artichokes, (globe or french), cooked, boiled, drained, with salt']
Cluster 13:
['Broccoli, frozen, spears, cooked, boiled, drained, without salt'
'Broccoli, frozen, chopped, cooked, boiled, drained, without salt'
'Brussels sprouts, frozen, cooked, boiled, drained, without salt'
'Peas, green, frozen, cooked, boiled, drained, without salt'
'Asparagus, frozen, cooked, boiled, drained, with salt']
Cluster 14:
['Succotash, (corn and limas), canned, with cream style corn'
'Succotash, (corn and limas), frozen, cooked, boiled, drained, without salt'
'Succotash, (corn and limas), cooked, boiled, drained, with salt'
'Succotash, (corn and limas), cooked, boiled, drained, without salt'
'Succotash, (corn and limas), raw']
Cluster 15:
['Cornsalad, raw' 'Onions, raw' 'Mustard spinach, (tendergreen), raw'
'Chicory roots, raw' 'Coriander (cilantro) leaves, raw']
Cluster 16:
['Potatoes, hash brown, frozen, plain, unprepared'
'Potatoes, hash brown, home-prepared'
'Potatoes, hash brown, refrigerated, unprepared'
"Sweet potato, frozen, unprepared (Includes foods for USDA's Food Distribution Program)"
"Potatoes, o'brien, frozen, prepared"]
Cluster 17:
['Potatoes, Russet, flesh and skin, baked'
'Potatoes, microwaved, cooked, in skin, skin with salt'
'Sweet potato, cooked, boiled, without skin, with salt'
'Sweet potato, frozen, cooked, baked, with salt'
'Potatoes, boiled, cooked without skin, flesh, with salt']
Cluster 18:
['Beans, pinto, immature seeds, frozen, cooked, boiled, drained, with salt'
'Beans, pinto, immature seeds, frozen, unprepared'
'Beans, snap, green, canned, no salt added, solids and liquids'
'Beans, pinto, mature seeds, sprouted, cooked, boiled, drained, with salt'
'Beans, snap, yellow, cooked, boiled, drained, without salt']
Cluster 19:
['Tomato juice, canned, with salt added'
'Tomatoes, red, ripe, canned, stewed' 'Catsup, low sodium'
'Tomatoes, red, ripe, cooked, with salt'
'Tomato products, canned, puree, with salt added']
Cluster 20:
['Mustard greens, frozen, cooked, boiled, drained, without salt'
'Turnip greens, frozen, cooked, boiled, drained, without salt'
'Spinach, cooked, boiled, drained, without salt'
'Mustard greens, cooked, boiled, drained, with salt'
'Turnips, cooked, boiled, drained, with salt']
Cluster 21:
['Mushrooms, shiitake, raw' 'Mushrooms, shiitake, dried'
'Mushrooms, oyster, raw' 'Mushrooms, brown, italian, or crimini, raw'
'Mushrooms, brown, italian, or crimini, exposed to ultraviolet light, raw']
Cluster 22:
['Seaweed, spirulina, raw' 'Seaweed, wakame, raw'
'Seaweed, spirulina, dried' 'Seaweed, irishmoss, raw'
'Yeast extract spread']
Cluster 23:
['Pickles, cucumber, sweet, low sodium (includes bread and butter pickles)'
'Pickles, cucumber, dill or kosher dill' 'Cabbage, kimchi'
'Pickle relish, hamburger' 'Radishes, hawaiian style, pickled']
Nut and Seed Products:
Number of items: 137
Optimal clusters: 2
Silhouette score: 0.226
Sample foods from each Nut and Seed Products cluster:
Cluster 0:
['Seeds, cottonseed kernels, roasted (glandless)'
'Seeds, pumpkin and squash seed kernels, roasted, with salt added'
'Seeds, sesame seed kernels, toasted, without salt added (decorticated)'
'Seeds, sunflower seed kernels from shell, dry roasted, with salt added'
'Seeds, cottonseed meal, partially defatted (glandless)']
Cluster 1:
['Nuts, mixed nuts, oil roasted, with peanuts, lightly salted'
'Nuts, butternuts, dried'
'Nuts, cashew nuts, dry roasted, with salt added'
'Nuts, chestnuts, european, dried, unpeeled'
'Nuts, almonds, dry roasted, without salt added']
Beef Products:
Number of items: 954
Optimal clusters: 32
Silhouette score: 0.206
Sample foods from each Beef Products cluster:
Cluster 0:
['Beef, chuck, shoulder clod, shoulder tender, medallion, separable lean and fat, trimmed to 0" fat, select, cooked, grilled'
'Beef, chuck, shoulder clod, shoulder top and center steaks, separable lean and fat, trimmed to 0" fat, all grades, cooked, grilled'
'Beef, chuck, top blade, separable lean and fat, trimmed to 0" fat, all grades, cooked, broiled'
'Beef, chuck, mock tender steak, separable lean only, trimmed to 0" fat, all grades, cooked, broiled'
'Beef, chuck, top blade, separable lean only, trimmed to 0" fat, all grades, cooked, broiled']
Cluster 1:
['Beef, round, eye of round roast, boneless, separable lean only, trimmed to 0" fat, choice, cooked, roasted'
'Beef, round, bottom round, roast, separable lean only, trimmed to 1/8" fat, all grades, cooked'
'Beef, round, bottom round, roast, separable lean and fat, trimmed to 0" fat, choice, cooked, roasted'
'Beef, round, tip round, roast, separable lean and fat, trimmed to 0" fat, choice, cooked, roasted'
'Beef, chuck, clod roast, separable lean only, trimmed to 0" fat, select, cooked, roasted']
Cluster 2:
['Beef, loin, tenderloin roast, boneless, separable lean and fat, trimmed to 0" fat, all grades, raw'
'Beef, top loin petite roast, boneless, separable lean and fat, trimmed to 1/8" fat, select, cooked, roasted'
'Beef, loin, tenderloin roast, boneless, separable lean only, trimmed to 0" fat, all grades, cooked, roasted'
'Beef, loin, tenderloin roast, boneless, separable lean only, trimmed to 0" fat, choice, cooked, roasted'
'Beef, top loin petite roast, boneless, separable lean only, trimmed to 1/8" fat, choice, cooked, roasted']
Cluster 3:
['Beef, New Zealand, imported, ribs prepared, cooked, fast roasted'
'Beef, New Zealand, imported, chuck eye roll, separable lean and fat, cooked, braised'
'Beef, New Zealand, imported, oyster blade, separable lean only, cooked, braised'
'Beef, New Zealand, imported, flank, separable lean and fat, cooked, braised'
'Beef, New Zealand, imported, manufacturing beef, cooked, boiled']
Cluster 4:
['Beef, chuck eye steak, boneless, separable lean only, trimmed to 0" fat, all grades, raw'
'Beef, rib eye steak/roast, bone-in, lip-on, separable lean only, trimmed to 1/8" fat, all grades, raw'
'Beef, chuck eye steak, boneless, separable lean and fat, trimmed to 0" fat, select, raw'
'Beef, rib eye steak/roast, boneless, lip-on, separable lean only, trimmed to 1/8" fat, select, raw'
'Beef, ribeye cap steak, boneless, separable lean only, trimmed to 0" fat, all grades, raw']
Cluster 5:
['Beef, rib, large end (ribs 6-9), separable lean and fat, trimmed to 1/8" fat, prime, cooked, roasted'
'Beef, rib, large end (ribs 6-9), separable lean and fat, trimmed to 0" fat, choice, cooked, roasted'
'Beef, rib, small end (ribs 10-12), separable lean only, trimmed to 1/8"fat, choice, cooked, broiled'
'Beef, rib, large end (ribs 6-9), separable lean and fat, trimmed to 1/8" fat, all grades, raw'
'Beef, rib, whole (ribs 6-12), separable lean and fat, trimmed to 1/8" fat, select, cooked, broiled']
Cluster 6:
['Beef, variety meats and by-products, tongue, cooked, simmered'
'Beef, variety meats and by-products, brain, cooked, pan-fried'
'Beef, cured, breakfast strips, raw or unheated'
'Beef, variety meats and by-products, heart, cooked, simmered'
'Beef, variety meats and by-products, thymus, cooked, braised']
Cluster 7:
['Beef, cured, corned beef, brisket, cooked'
'Beef, brisket, flat half, separable lean only, trimmed to 0" fat, all grades, cooked, braised'
'Beef, brisket, flat half, boneless, separable lean only, trimmed to 0" fat, choice, raw'
'Beef, brisket, flat half, separable lean only, trimmed to 1/8" fat, choice, cooked, braised'
'Beef, brisket, flat half, boneless separable lean only, trimmed to 0" fat, all grades, raw']
Cluster 8:
['Beef, shoulder top blade steak, boneless, separable lean and fat, trimmed to 0" fat, select, cooked, grilled'
'Beef, top loin filet, boneless, separable lean only, trimmed to 1/8" fat, select, cooked, grilled'
'Beef, short loin, porterhouse steak, separable lean and fat, trimmed to 1/8" fat, choice, cooked, grilled'
'Beef, shoulder top blade steak, boneless, separable lean and fat, trimmed to 0" fat, all grades, cooked, grilled'
'Beef, loin, top loin steak, boneless, lip off, separable lean and fat, trimmed to 0" fat, select, cooked, grilled']
Cluster 9:
['Beef, Australian, imported, grass-fed, loin, tenderloin steak/roast, boneless, separable lean only, raw'
'Beef, Australian, imported, grass-fed, rib, ribeye steak/roast lip-on, boneless, separable lean only, raw'
'Beef, Australian, imported, grass-fed, rib, ribeye steak/roast lip-on, boneless, separable lean and fat, raw'
'Beef, Australian, imported, grass-fed, loin, tenderloin steak/roast, boneless, separable lean and fat, raw'
'Beef, Australian, imported, grass-fed, round, bottom round steak/roast, boneless, separable lean and fat, raw']
Cluster 10:
['Beef, ground, 95% lean meat / 5% fat, loaf, cooked, baked'
'Beef, ground, 75% lean meat / 25% fat, loaf, cooked, baked'
'Beef, ground, 97% lean meat / 3% fat, loaf, cooked, baked'
'Beef, ground, 70% lean meat / 30% fat, loaf, cooked, baked'
"Beef, ground, 85% lean meat / 15% fat, raw (Includes foods for USDA's Food Distribution Program)"]
Cluster 11:
['Beef, plate steak, boneless, outside skirt, separable lean and fat, trimmed to 0" fat, select, raw'
'Beef, plate steak, boneless, inside skirt, separable lean only, trimmed to 0" fat, select, cooked, grilled'
'Beef, plate steak, boneless, inside skirt, separable lean only, trimmed to 0" fat, all grades, raw'
'Beef, plate steak, boneless, outside skirt, separable lean and fat, trimmed to 0" fat, all grades, cooked, grilled'
'Beef, plate steak, boneless, outside skirt, separable lean only, trimmed to 0" fat, all grades, raw']
Cluster 12:
['Beef, loin, top sirloin cap steak, boneless, separable lean only, trimmed to 1/8" fat, all grades, raw'
'Beef, loin, top sirloin cap steak, boneless, separable lean and fat, trimmed to 1/8" fat, all grades, raw'
'Beef, top sirloin, steak, separable lean only, trimmed to 1/8" fat, all grades, raw'
'Beef, loin, top loin steak, boneless, lip off, separable lean only, trimmed to 0" fat, all grades, raw'
'Beef, chuck, under blade center steak, boneless, Denver Cut, separable lean and fat, trimmed to 0" fat, all grades, raw']
Cluster 13:
['Beef, round, eye of round steak, boneless, separable lean only, trimmed to 0" fat, select, cooked, grilled'
'Beef, rib eye steak, boneless, lip-on, separable lean only, trimmed to 1/8" fat, select, cooked, grilled'
'Beef, rib eye steak, boneless, lip off, separable lean only, trimmed to 0" fat, select, cooked, grilled'
'Beef, shoulder steak, boneless, separable lean and fat, trimmed to 0" fat, all grades, cooked, grilled'
'Beef, chuck eye steak, boneless, separable lean only, trimmed to 0" fat, select, cooked, grilled']
Cluster 14:
['Beef, round, top round, separable lean only, trimmed to 0" fat, choice, cooked, braised'
'Beef, round, bottom round, steak, separable lean only, trimmed to 0" fat, select, cooked, braised'
'Beef, round, top round steak, boneless, separable lean and fat, trimmed to 0" fat, choice, cooked, grilled'
'Beef, round, top round steak, boneless, separable lean only, trimmed to 0" fat, choice, cooked, grilled'
'Beef, round, top round, steak, separable lean and fat, trimmed to 1/8" fat, choice, cooked, broiled']
Cluster 15:
['Beef, ground, 95% lean meat / 5% fat, patty, cooked, broiled'
'Beef, ground, 93% lean meat / 7% fat, patty, cooked, broiled'
'Beef, ground, 93% lean meat /7% fat, patty, cooked, pan-broiled'
'Beef, ground, 80% lean meat / 20% fat, patty, cooked, pan-broiled'
'Beef, ground, 70% lean meat / 30% fat, patty cooked, pan-broiled']
Cluster 16:
['Beef, composite of trimmed retail cuts, separable lean only, trimmed to 1/8" fat, select, cooked'
'Beef, composite of trimmed retail cuts, separable lean and fat, trimmed to 1/8" fat, choice, raw'
'Beef, composite of trimmed retail cuts, separable lean and fat, trimmed to 1/8" fat, select, cooked'
'Beef, retail cuts, separable fat, raw'
'Beef, composite of trimmed retail cuts, separable lean only, trimmed to 1/8" fat, all grades, raw']
Cluster 17:
['Beef, chuck, mock tender steak, boneless, separable lean and fat, trimmed to 0" fat, choice, raw'
'Beef, tenderloin, separable lean and fat, trimmed to 1/8" fat, prime, raw'
'Beef, tenderloin, steak, separable lean and fat, trimmed to 1/8" fat, choice, raw'
'Beef, tenderloin, steak, separable lean only, trimmed to 1/8" fat, select, raw'
'Beef, loin, tenderloin steak, boneless, separable lean only, trimmed to 0" fat, choice, raw']
Cluster 18:
['Beef, chuck, under blade pot roast, boneless, separable lean only, trimmed to 0" fat, all grades, cooked, braised'
'Beef, chuck, under blade steak, boneless, separable lean and fat, trimmed to 0" fat, choice, cooked, braised'
'Beef, chuck, mock tender steak, boneless, separable lean only, trimmed to 0" fat, choice, cooked, braised'
'Beef, chuck, arm pot roast, separable lean and fat, trimmed to 0" fat, select, cooked, braised'
'Beef, chuck, mock tender steak, boneless, separable lean and fat, trimmed to 0" fat, all grades, cooked, braised']
Cluster 19:
['Beef, New Zealand, imported, oyster blade, separable lean only, raw'
'Beef, New Zealand, imported, bolar blade, separable lean only, cooked, fast roasted'
'Beef, New Zealand, imported, bolar blade, separable lean and fat, raw'
'Beef, New Zealand, imported, oyster blade, separable lean and fat, raw'
'Beef, New Zealand, imported, bolar blade, separable lean and fat, cooked, fast roasted']
Cluster 20:
['Beef, chuck eye Country-Style ribs, boneless, separable lean and fat, trimmed to 0" fat, select, cooked, braised'
'Beef, chuck eye Country-Style ribs, boneless, separable lean only, trimmed to 0" fat, all grades, raw'
'Beef, chuck eye Country-Style ribs, boneless, separable lean only, trimmed to 0" fat, select, cooked, braised'
'Beef, chuck eye Country-Style ribs, boneless, separable lean only, trimmed to 0" fat, choice, cooked, braised'
'Beef, chuck eye Country-Style ribs, boneless, separable lean only, trimmed to 0" fat, all grades, cooked, braised']
Cluster 21:
['Beef, chuck eye roast, boneless, America\'s Beef Roast, separable lean and fat, trimmed to 0" fat, all grades, raw'
'Beef, chuck eye roast, boneless, America\'s Beef Roast, separable lean only, trimmed to 0" fat, choice, raw'
'Beef, rib eye roast, bone-in, lip-on, separable lean and fat, trimmed to 1/8" fat, choice, cooked, roasted'
'Beef, chuck eye roast, boneless, America\'s Beef Roast, separable lean only, trimmed to 0" fat, select, cooked, roasted'
'Beef, chuck eye roast, boneless, America\'s Beef Roast, separable lean and fat, trimmed to 0" fat, choice, cooked, roasted']
Cluster 22:
['Beef, variety meats and by-products, heart, raw'
'Beef, variety meats and by-products, pancreas, raw'
'Beef, variety meats and by-products, suet, raw'
'Beef, variety meats and by-products, brain, raw'
'Beef, variety meats and by-products, mechanically separated beef, raw']
Cluster 23:
['Beef, short loin, porterhouse steak, separable lean only, trimmed to 0" fat, choice, cooked, broiled'
'Beef, short loin, porterhouse steak, separable lean and fat, trimmed to 0" fat, USDA choice, cooked, broiled'
'Beef, tenderloin, steak, separable lean and fat, trimmed to 1/8" fat, all grades, cooked, broiled'
'Beef, short loin, t-bone steak, separable lean and fat, trimmed to 0" fat, USDA choice, cooked, broiled'
'Beef, top sirloin, steak, separable lean only, trimmed to 0" fat, choice, cooked, broiled']
Cluster 24:
['Beef, New Zealand, imported, variety meats and by-products, heart, cooked, boiled'
'Beef, New Zealand, imported, variety meats and by-products, tripe cooked, boiled'
'Beef, New Zealand, imported, variety meats and by-products liver, cooked, boiled'
'Beef, New Zealand, imported, variety meats and by-products, liver, raw'
'Beef, New Zealand, imported, variety meats and by-products, kidney, cooked, boiled']
Cluster 25:
['Beef, chuck for stew, separable lean and fat, select, raw'
'Beef, shoulder pot roast or steak, boneless, separable lean only, trimmed to 0" fat, all grades, raw'
'Beef, chuck, arm pot roast, separable lean only, trimmed to 1/8" fat, choice, raw'
'Beef, chuck, under blade pot roast or steak, boneless, separable lean and fat, trimmed to 0" fat, all grades, raw'
'Beef, chuck, clod roast, separable lean only, trimmed to 1/4" fat, all grades, raw']
Cluster 26:
['Beef, New Zealand, imported, brisket point end, separable lean only, raw'
'Beef, New Zealand, imported, hind shin, separable lean and fat, raw'
'Beef, New Zealand, imported, hind shin, separable lean only, raw'
'Beef, New Zealand, imported, tenderloin, separable lean and fat, raw'
'Beef, New Zealand, imported, flank, separable lean only, raw']
Cluster 27:
['Beef, top sirloin, steak, separable lean only, trimmed to 1/8" fat, choice, raw'
'Beef, loin, top loin steak, boneless, lip off, separable lean only, trimmed to 0" fat, choice, raw'
'Beef, chuck, shoulder clod, top blade, steak, separable lean and fat, trimmed to 0" fat, select, raw'
'Beef, top sirloin, steak, separable lean only, trimmed to 1/8" fat, select, raw'
'Beef, chuck, under blade center steak, boneless, Denver Cut, separable lean and fat, trimmed to 0" fat, choice, raw']
Cluster 28:
['Beef, Australian, imported, Wagyu, loin, tenderloin steak/roast, boneless, separable lean and fat, Aust. marble score 4/5, raw'
'Beef, Australian, imported, Wagyu, loin, tenderloin steak/roast, boneless, separable lean only, Aust. marble score 9, raw'
'Beef, Australian, imported, Wagyu, loin, top loin steak/roast, boneless, separable lean and fat, Aust. marble score 4/5, raw'
'Beef, Australian, imported, Wagyu, external fat, Aust. marble score 9, raw'
'Beef, Australian, imported, Wagyu, loin, top loin steak/roast, boneless, separable lean only, Aust. marble score 4/5, raw']
Cluster 29:
['Beef, round, top round roast, boneless, separable lean and fat, trimmed to 0" fat, choice, raw'
'Beef, round, top round roast, boneless, separable lean only, trimmed to 0" fat, choice, raw'
'Beef, round, outside round, bottom round, steak, separable lean and fat, trimmed to 0" fat, all grades, raw'
'Beef, round, knuckle, tip side, steak, separable lean and fat, trimmed to 0" fat, choice, raw'
'Beef, round, eye of round, roast, separable lean and fat, trimmed to 1/8" fat, choice, raw']
Cluster 30:
['Beef, rib, back ribs, bone-in, separable lean only, trimmed to 0" fat, all grades, cooked, braised'
'Beef, chuck, short ribs, boneless, separable lean and fat, trimmed to 0" fat, select, raw'
'Beef, rib, whole (ribs 6-12), separable lean and fat, trimmed to 1/8" fat, prime, raw'
'Beef, chuck, short ribs, boneless, separable lean only, trimmed to 0" fat, select, raw'
'Beef, carcass, separable lean and fat, select, raw']
Cluster 31:
['Beef, loin, bottom sirloin butt, tri-tip roast, separable lean only, trimmed to 0" fat, all grades, cooked, roasted'
'Beef, bottom sirloin, tri-tip roast, separable lean and fat, trimmed to 0" fat, select, cooked, roasted'
'Beef, bottom sirloin, tri-tip roast, separable lean and fat, trimmed to 0" fat, select, raw'
'Beef, bottom sirloin, tri-tip roast, separable lean only, trimmed to 0" fat, select, raw'
'Beef, bottom sirloin, tri-tip roast, separable lean and fat, trimmed to 0" fat, choice, cooked, roasted']
Beverages:
Number of items: 325
Optimal clusters: 31
Silhouette score: 0.160
Sample foods from each Beverages cluster:
Cluster 0:
['Beverages, coffee, instant, with whitener, reduced calorie'
'Beverages, coffee substitute, cereal grain beverage, prepared with water'
'Beverages, coffee and cocoa, instant, decaffeinated, with whitener and low calorie sweetener'
'Beverages, coffee substitute, cereal grain beverage, powder'
'Beverages, coffee substitute, cereal grain beverage, powder, prepared with whole milk']
Cluster 1:
['Beverages, fruit-flavored drink, powder, with high vitamin C with other added vitamins, low calorie'
'Beverages, cranberry-apple juice drink, low calorie, with vitamin C added'
'Beverages, Vegetable and fruit juice drink, reduced calorie, with low-calorie sweetener, added vitamin C'
'Beverages, MOTTS, Apple juice light, fortified with vitamin C'
'Beverages, Lemonade fruit juice drink light, fortified with vitamin E and C']
Cluster 2:
['Alcoholic beverage, beer, light, BUD LIGHT'
'Alcoholic beverage, beer, light, low carb'
'Alcoholic beverages, beer, higher alcohol'
'Alcoholic beverage, wine, light'
'Alcoholic beverage, beer, regular, all']
Cluster 3:
['Lemonade, frozen concentrate, white' 'Beverages, Lemonade, powder'
'Beverages, lemonade, frozen concentrate, pink, prepared with water'
'Limeade, frozen concentrate, prepared with water'
'Lemonade, frozen concentrate, pink']
Cluster 4:
['Beverages, OCEAN SPRAY, Cran Grape'
'Beverages, OCEAN SPRAY, Cran-Energy, Cranberry Energy Juice Drink'
'Beverages, OCEAN SPRAY, Cranberry-Apple Juice Drink, bottled'
'Beverages, OCEAN SPRAY, Cran Pomegranate'
'Beverages, OCEAN SPRAY, Cran Raspberry Juice Drink']
Cluster 5:
['Beverages, water, tap, municipal' 'Beverages, water, tap, well'
'Beverages, DANNON, water, bottled, non-carbonated, with Fluoride'
'Water, bottled, generic' 'Beverages, water, bottled, PERRIER']
Cluster 6:
['Beverages, OVALTINE, chocolate malt powder'
'Beverages, rich chocolate, powder'
'Beverages, Eggnog-flavor mix, powder, prepared with whole milk'
'Beverages, chocolate-flavor beverage mix for milk, powder, with added nutrients'
'Beverages, Strawberry-flavor beverage mix, powder, prepared with whole milk']
Cluster 7:
['Whiskey sour mix, bottled, with added potassium and sodium'
'Alcoholic beverage, whiskey sour, prepared with water, whiskey and powder mix'
'Beverages, Whiskey sour mix, powder'
'Beverages, Whiskey sour mix, bottled' 'Alcoholic beverage, whiskey sour']
Cluster 8:
['Beverages, Coconut water, ready-to-drink, unsweetened'
'Beverages, chocolate drink, milk and soy based, ready to drink, fortified'
'Beverages, rice milk, unsweetened'
'Beverages, chocolate almond milk, unsweetened, shelf-stable, fortified with vitamin D2 and E'
'Beverages, almond milk, unsweetened, shelf stable']
Cluster 9:
['Beverages, Energy drink, RED BULL'
'Beverages, Energy drink, FULL THROTTLE'
'Beverages, Energy drink, ROCKSTAR'
'Beverages, Energy drink, ROCKSTAR, sugar free']
Cluster 10:
['Beverages, Meal supplement drink, canned, peanut flavor'
'Beverages, grape drink, canned'
'Beverages, cranberry-grape juice drink, bottled'
'Beverages, pineapple and orange juice drink, canned'
'Beverages, Kiwi Strawberry Juice Drink']
Cluster 11:
['Beverages, V8 SPLASH Smoothies, Strawberry Banana'
'Beverages, V8 V-FUSION Juices, Strawberry Banana'
'Beverages, V8 SPLASH Juice Drinks, Diet Strawberry Kiwi'
'Beverages, V8 V-FUSION Juices, Tropical'
'Beverages, V8 V- FUSION Juices, Acai Berry']
Cluster 12:
['Alcoholic beverage, wine, table, red'
'Alcoholic beverage, wine, table, white'
'Alcoholic beverage, wine, dessert, dry'
'Alcoholic beverage, wine, cooking' 'Alcoholic beverage, rice (sake)']
Cluster 13:
['Beverages, ARIZONA, tea, ready-to-drink, lemon'
'Beverages, tea, black, ready to drink, decaffeinated, diet'
"Beverages, WENDY'S, tea, ready-to-drink, unsweetened"
'Beverages, LIPTON BRISK, tea, black, ready-to-drink, lemon'
'Beverages, SNAPPLE, tea, black and green, ready to drink, lemon, diet']
Cluster 14:
['Alcoholic beverage, distilled, all (gin, rum, vodka, whiskey) 94 proof'
'Alcoholic beverage, distilled, rum, 80 proof'
'Alcoholic beverage, distilled, all (gin, rum, vodka, whiskey) 100 proof'
'Alcoholic beverage, distilled, whiskey, 86 proof'
'Alcoholic beverage, distilled, all (gin, rum, vodka, whiskey) 80 proof']
Cluster 15:
['Beverages, orange-flavor drink, breakfast type, powder'
'Beverages, Orange-flavor drink, breakfast type, low calorie, powder'
'Beverages, Orange drink, breakfast type, with juice and pulp, frozen concentrate'
'Beverages, Orange drink, breakfast type, with juice and pulp, frozen concentrate, prepared with water'
'Beverages, Orange-flavor drink, breakfast type, with pulp, frozen concentrate, prepared with water']
Cluster 16:
['Beverages, ABBOTT, ENSURE PLUS, ready-to-drink'
'Beverages, Whey protein powder isolate'
'Beverages, ABBOTT, EAS soy protein powder'
'Beverages, ABBOTT, EAS whey protein powder'
'Beverages, ABBOTT, ENSURE, Nutritional Shake, Ready-to-Drink']
Cluster 17:
['Beverages, COCA-COLA, POWERADE, lemon-lime flavored, ready-to-drink'
'Beverages, THE COCA-COLA COMPANY, NOS Zero, energy drink, sugar-free with guarana, fortified with vitamins B6 and B12'
'Beverages, PEPSICO QUAKER, Gatorade, G performance O 2, ready-to-drink.'
'Beverages, The COCA-COLA company, Glaceau Vitamin Water, Revive Fruit Punch, fortified'
'Beverages, THE COCA-COLA COMPANY, NOS energy drink, Original, grape, loaded cherry, charged citrus, fortified with vitamins B6 and B12']
Cluster 18:
['Beverages, tea, instant, lemon, unsweetened'
'Beverages, tea, instant, unsweetened, powder'
'Beverages, tea, instant, lemon, with added ascorbic acid'
'Beverages, tea, instant, sweetened with sodium saccharin, lemon-flavored, powder'
'Beverages, tea, instant, lemon, diet']
Cluster 19:
['Beverages, carbonated, low calorie, other than cola or pepper, without caffeine'
'Beverages, carbonated, low calorie, other than cola or pepper, with aspartame, contains caffeine'
'Beverages, carbonated, pepper-type, contains caffeine'
'Carbonated beverage, low calorie, other than cola or pepper, with sodium saccharin, without caffeine'
'Beverages, carbonated, cola, without caffeine']
Cluster 20:
['Alcoholic beverage, tequila sunrise, canned'
'Alcoholic beverage, pina colada, canned'
'Alcoholic beverage, pina colada, prepared-from-recipe'
'Alcoholic beverage, daiquiri, canned'
'Alcoholic beverage, daiquiri, prepared-from-recipe']
Cluster 21:
['Beverages, tea, black, brewed, prepared with distilled water'
'Beverages, tea, black, brewed, prepared with tap water'
'Beverages, coffee, brewed, prepared with tap water, decaffeinated'
'Beverages, tea, black, brewed, prepared with tap water, decaffeinated'
'Beverages, coffee, brewed, prepared with tap water']
Cluster 22:
['Beverages, shake, fast food, strawberry'
'Beverages, UNILEVER, SLIMFAST Shake Mix, powder, 3-2-1 Plan'
'Shake, fast food, vanilla'
'Beverages, UNILEVER, SLIMFAST Shake Mix, high protein, whey powder, 3-2-1 Plan,'
'Beverages, SLIMFAST, Meal replacement, High Protein Shake, Ready-To-Drink, 3-2-1 plan']
Cluster 23:
['Water, with corn syrup and/or sugar and low calorie sweetener, fruit flavored'
'Beverages, fruit punch juice drink, frozen concentrate'
'Beverages, Tropical Punch, ready-to-drink'
'Beverages, fruit punch drink, without added nutrients, canned'
'Beverages, Fruit punch drink, frozen concentrate']
Cluster 24:
['Beverages, Dairy drink mix, chocolate, reduced calorie, with low-calorie sweeteners, powder'
'Beverages, Cocoa mix, powder, prepared with water'
'Beverages, Cocoa mix, low calorie, powder, with added calcium, phosphorus, aspartame, without added sodium or vitamin A'
'Cocoa mix, NESTLE, Rich Chocolate Hot Cocoa Mix'
'Beverages, fruit-flavored drink, dry powdered mix, low calorie, with aspartame']
Cluster 25:
['Beverages, coffee, instant, mocha, sweetened'
'Beverages, coffee, brewed, espresso, restaurant-prepared'
'Beverages, coffee, instant, regular, half the caffeine'
'Beverages, coffee, instant, decaffeinated, powder'
'Beverages, coffee, instant, vanilla, sweetened, decaffeinated, with non dairy creamer']
Cluster 26:
['Beverages, tea, green, brewed, regular'
'Beverages, tea, green, brewed, decaffeinated'
'Beverages, tea, herb, brewed, chamomile'
'Beverages, tea, hibiscus, brewed' 'Beverages, tea, Oolong, brewed']
Cluster 27:
['Beverages, MONSTER energy drink, low carb'
'Beverages, Energy drink, AMP, sugar free'
'Beverages, Energy Drink, sugar free' 'Beverages, Energy drink, AMP'
'Beverages, Energy drink, VAULT, citrus flavor']
Cluster 28:
['Alcoholic beverage, liqueur, coffee, 63 proof'
'Alcoholic beverage, liqueur, coffee with cream, 34 proof'
'Alcoholic beverage, creme de menthe, 72 proof'
'Alcoholic beverage, liqueur, coffee, 53 proof']
Cluster 29:
['Beverages, carbonated, orange' 'Beverages, carbonated, root beer'
'Beverages, Horchata, as served in restaurant'
'Beverages, carbonated, cola, regular'
'Beverages, carbonated, ginger ale']
Cluster 30:
['Beverages, AMBER, hard cider' 'Beverages, Malt liquor beverage'
'Malt beverage, includes non-alcoholic beer'
'Alcoholic beverage, malt beer, hard lemonade']
Finfish and Shellfish Products:
Number of items: 264
Optimal clusters: 5
Silhouette score: 0.198
Sample foods from each Finfish and Shellfish Products cluster:
Cluster 0:
['Fish, snapper, mixed species, raw' 'Fish, mackerel, king, raw'
'Fish, salmon, sockeye, raw' 'Fish, tuna, fresh, bluefin, raw'
'Fish, herring, Atlantic, raw']
Cluster 1:
['Fish, turbot, european, cooked, dry heat'
'Fish, pollock, Alaska, cooked, dry heat (may contain additives to retain moisture)'
'Fish, mackerel, king, cooked, dry heat'
'Fish, salmon, chinook, cooked, dry heat'
'Fish, sheepshead, cooked, dry heat']
Cluster 2:
['Fish, cod, Atlantic, canned, solids and liquid'
"Fish, tuna, light, canned in water, drained solids (Includes foods for USDA's Food Distribution Program)"
'Fish, salmon, pink, canned, drained solids'
'Fish, tuna, light, canned in water, without salt, drained solids'
'Fish, anchovy, european, canned in oil, drained solids']
Cluster 3:
['Mollusks, squid, mixed species, cooked, fried'
'Mollusks, abalone, mixed species, raw'
'Mollusks, scallop, mixed species, cooked, breaded and fried'
'Mollusks, scallop, (bay and sea), cooked, steamed'
'Mollusks, oyster, eastern, farmed, cooked, dry heat']
Cluster 4:
['Crustaceans, crab, queen, cooked, moist heat'
'Crustaceans, shrimp, mixed species, cooked, breaded and fried'
'Crustaceans, crayfish, mixed species, farmed, cooked, moist heat'
'Crustaceans, crab, blue, raw'
'Crustaceans, crayfish, mixed species, wild, raw']
Legumes and Legume Products:
Number of items: 289
Optimal clusters: 28
Silhouette score: 0.234
Sample foods from each Legumes and Legume Products cluster:
Cluster 0:
['Natto' 'Miso']
Cluster 1:
['Beans, kidney, royal red, mature seeds, cooked, boiled, without salt'
'Beans, black, mature seeds, cooked, boiled, with salt'
'Beans, kidney, royal red, mature seeds, cooked, boiled with salt'
'Beans, kidney, california red, mature seeds, cooked, boiled, without salt'
'Beans, cranberry (roman), mature seeds, cooked, boiled, without salt']
Cluster 2:
['SILK Plus Omega-3 DHA, soymilk' 'SILK Banana-Strawberry soy yogurt'
'SILK Unsweetened, soymilk' 'SILK Very Vanilla, soymilk'
'SILK Vanilla soy yogurt (family size)']
Cluster 3:
['Peanut butter with omega-3, creamy' 'Peanut spread, reduced sugar'
'Peanut butter, chunky, vitamin and mineral fortified'
'Peanut butter, smooth, reduced fat' 'Peanut flour, low fat']
Cluster 4:
['MORI-NU, Tofu, silken, soft' 'MORI-NU, Tofu, silken, lite extra firm'
'MORI-NU, Tofu, silken, extra firm' 'MORI-NU, Tofu, silken, firm'
'MORI-NU, Tofu, silken, lite firm']
Cluster 5:
['Carob flour' 'Soy flour, full-fat, raw' 'Peanut flour, defatted'
'Soy meal, defatted, raw' 'Soy flour, full-fat, roasted']
Cluster 6:
['Soymilk, chocolate, nonfat, with added calcium, vitamins A and D'
'Soymilk (all flavors), unsweetened, with added calcium, vitamins A and D'
'Soymilk, original and vanilla, light, with added calcium, vitamins A and D'
'Soymilk (all flavors), nonfat, with added calcium, vitamins A and D'
'Soymilk, chocolate, with added calcium, vitamins A and D']
Cluster 7:
['Peanuts, all types, dry-roasted, without salt'
'Peanuts, spanish, oil-roasted, with salt'
'Peanuts, all types, oil-roasted, without salt' 'Peanuts, virginia, raw'
'Peanuts, valencia, raw']
Cluster 8:
['Beans, black, mature seeds, canned, low sodium'
'Beans, navy, mature seeds, canned' 'Beans, white, mature seeds, canned'
'Beans, adzuki, mature seeds, canned, sweetened'
"Beans, great northern, mature seeds, raw (Includes foods for USDA's Food Distribution Program)"]
Cluster 9:
['Noodles, chinese, cellophane or long rice (mung beans), dehydrated'
'Vermicelli, made from soy']
Cluster 10:
['Bacon bits, meatless' 'Tempeh, cooked' 'Bacon, meatless' 'Meat extender'
'Vegetarian fillets']
Cluster 11:
['Tofu, soft, prepared with calcium sulfate and magnesium chloride (nigari)'
'Tofu, dried-frozen (koyadofu)'
'Tofu, raw, regular, prepared with calcium sulfate'
'Tofu, dried-frozen (koyadofu), prepared with calcium sulfate'
'Tofu, hard, prepared with nigari']
Cluster 12:
['Beans, kidney, red, mature seeds, raw'
'Beans, kidney, royal red, mature seeds, raw'
'Beans, black turtle, mature seeds, raw'
'Beans, small white, mature seeds, raw' 'Beans, navy, mature seeds, raw']
Cluster 13:
['Soy protein concentrate, produced by alcohol extraction'
'Soy protein isolate' 'Soy protein concentrate, produced by acid wash'
'Soy protein isolate, potassium type']
Cluster 14:
['Cowpeas, common (blackeyes, crowder, southern), mature seeds, cooked, boiled, with salt'
'Cowpeas, common (blackeyes, crowder, southern), mature seeds, raw'
'Cowpeas, catjang, mature seeds, raw'
'Cowpeas, common (blackeyes, crowder, southern), mature seeds, canned, plain'
'Cowpeas, common (blackeyes, crowder, southern), mature seeds, canned with pork']
Cluster 15:
['Vitasoy USA Organic Nasoya, Tofu Plus Firm'
'Vitasoy USA, Vitasoy Organic Creamy Original Soymilk'
'Vitasoy USA Organic Nasoya, Tofu Plus Extra Firm'
'HOUSE FOODS Premium Firm Tofu' 'HOUSE FOODS Premium Soft Tofu']
Cluster 16:
['Chickpeas (garbanzo beans, bengal gram), mature seeds, cooked, boiled, without salt'
'Chickpea flour (besan)'
'Chickpeas (garbanzo beans, bengal gram), mature seeds, cooked, boiled, with salt'
'Chickpeas (garbanzo beans, bengal gram), mature seeds, canned, solids and liquids, low sodium'
'Chickpeas (garbanzo beans, bengal gram), mature seeds, canned, drained solids']
Cluster 17:
['Refried beans, canned, fat-free'
'Refried beans, canned, traditional, reduced sodium'
'Frijoles rojos volteados (Refried beans, red, canned)'
'Refried beans, canned, vegetarian' 'Chili with beans, canned']
Cluster 18:
['Hummus, home prepared' 'Hummus, commercial' 'Falafel, home-prepared'
'Papad']
Cluster 19:
['Broadbeans (fava beans), mature seeds, cooked, boiled, without salt'
'Broadbeans (fava beans), mature seeds, canned'
'Broadbeans (fava beans), mature seeds, raw'
'Winged beans, mature seeds, cooked, boiled, without salt'
'Broadbeans (fava beans), mature seeds, cooked, boiled, with salt']
Cluster 20:
['Beans, baked, canned, no salt added'
'Beans, baked, canned, with pork and sweet sauce'
'Beans, baked, canned, with pork and tomato sauce'
'Beans, baked, canned, plain or vegetarian'
'Beans, baked, canned, with franks']
Cluster 21:
['Lupins, mature seeds, cooked, boiled, with salt'
'Lentils, mature seeds, cooked, boiled, with salt'
'Lentils, mature seeds, cooked, boiled, without salt'
'Peas, split, mature seeds, cooked, boiled, without salt'
'Peas, split, mature seeds, cooked, boiled, with salt']
Cluster 22:
['Beans, kidney, red, mature seeds, canned, solids and liquid, low sodium'
'Beans, kidney, red, mature seeds, canned, drained solids'
'Beans, pinto, mature seeds, canned, solids and liquids'
'Beans, pinto, mature seeds, canned, solids and liquids, low sodium'
'Beans, kidney, red, mature seeds, canned, drained solids, rinsed in tap water']
Cluster 23:
['Mung beans, mature seeds, cooked, boiled, without salt'
'Mung beans, mature seeds, cooked, boiled, with salt'
'Soybeans, mature seeds, cooked, boiled, with salt'
'Mothbeans, mature seeds, cooked, boiled, with salt'
'Hyacinth beans, mature seeds, cooked, boiled, without salt']
Cluster 24:
['Lima beans, large, mature seeds, raw'
'Lima beans, large, mature seeds, canned'
'Lima beans, thin seeded (baby), mature seeds, raw'
'Lima beans, thin seeded (baby), mature seeds, cooked, boiled, with salt'
'Lima beans, thin seeded (baby), mature seeds, cooked, boiled, without salt']
Cluster 25:
['Soybeans, mature seeds, roasted, no salt added'
'Soybeans, mature seeds, roasted, salted' 'Soybean, curd cheese'
'Soybeans, mature seeds, dry roasted' 'Soybeans, mature seeds, raw']
Cluster 26:
['Soy sauce made from soy and wheat (shoyu), low sodium'
'Soy sauce, reduced sodium, made from hydrolyzed vegetable protein'
'Soy sauce made from hydrolyzed vegetable protein'
'Soy sauce made from soy (tamari)'
'Soy sauce made from soy and wheat (shoyu)']
Cluster 27:
['Lentils, raw' 'Lupins, mature seeds, raw' 'Lentils, pink or red, raw'
'Winged beans, mature seeds, raw' 'Hyacinth beans, mature seeds, raw']
Lamb, Veal, and Game Products:
Number of items: 462
Optimal clusters: 2
Silhouette score: 0.288
Sample foods from each Lamb, Veal, and Game Products cluster:
Cluster 0:
['Game meat, muskrat, cooked, roasted'
'Game meat, deer, ground, cooked, pan-broiled'
'Game meat, elk, ground, raw' 'Game meat, raccoon, cooked, roasted'
'Bison, ground, grass-fed, cooked']
Cluster 1:
['Lamb, shoulder, whole (arm and blade), separable lean and fat, trimmed to 1/4" fat, choice, cooked, roasted'
'Lamb, New Zealand, imported, frozen, loin, separable lean only, cooked, broiled'
'Lamb, New Zealand, imported, testes, cooked, soaked and fried'
'Lamb, Australian, imported, fresh, leg, sirloin chops, boneless, separable lean and fat, trimmed to 1/8" fat, raw'
'Veal, cubed for stew (leg and shoulder), separable lean only, raw']
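For context on these per-category listings: each food group's block reports the number of items, the number of clusters selected for KMeans, the resulting silhouette score (between -1 and 1, where higher values mean items sit closer to their own cluster than to the nearest other cluster), and up to five sample foods per cluster. The snippet below is a minimal sketch of how such a per-group selection could be produced; it is not the notebook's actual cell, and embeddings and names are hypothetical stand-ins, filled with synthetic data here only so the sketch runs on its own, for the description embeddings and food names prepared earlier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Stand-ins for one food group's data (assumed shapes; replace with real arrays).
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(336, 50))                         # description embeddings
names = np.array([f"Food item {i}" for i in range(336)])        # food descriptions

# Sweep candidate cluster counts and keep the k with the best silhouette score.
best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 33):
    km = KMeans(n_clusters=k, n_init=10, random_state=42)
    labels = km.fit_predict(embeddings)
    score = silhouette_score(embeddings, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

# Report the selection and print up to five sample foods per cluster,
# mirroring the format of the listings in this section.
print(f"Optimal clusters: {best_k}")
print(f"Silhouette score: {best_score:.3f}")
for c in range(best_k):
    print(f"Cluster {c}:")
    print(names[best_labels == c][:5])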
Baked Products:
Number of items: 513
Optimal clusters: 29
Silhouette score: 0.155
Sample foods from each Baked Products cluster:
Cluster 0:
['George Weston Bakeries, Thomas English Muffins'
'English muffins, plain, unenriched, with calcium propionate (includes sourdough)'
'Popovers, dry mix, enriched'
'Muffins, English, plain, toasted, enriched, with calcium propionate (includes sourdough)'
'Popovers, dry mix, unenriched']
Cluster 1:
['Cake, coffeecake, cheese'
'Cake, white, prepared from recipe without frosting'
'Cake, pudding-type, chocolate, dry mix' 'Cake, yellow, light, dry mix'
'Cake, coffeecake, fruit']
Cluster 2:
['Bread, rice bran, toasted' 'Bread, oat bran' 'Bread, oatmeal, toasted'
'Bread, oatmeal' 'Bread, reduced-calorie, oatmeal']
Cluster 3:
['Cookies, graham crackers, chocolate-coated' 'Cookies, raisin, soft-type'
'Cookies, fig bars' 'Cookies, coconut macaroon'
'Heinz, Weight Watcher, Chocolate Eclair, frozen']
Cluster 4:
['Pie, apple, commercially prepared, unenriched flour'
'Pie, pecan, commercially prepared' 'Pie, cherry, prepared from recipe'
'Pie, Dutch Apple, Commercially Prepared'
'Pie, coconut cream, prepared from mix, no-bake type']
Cluster 5:
['Crackers, whole-wheat, reduced fat' 'Crackers, wheat, low salt'
'Pepperidge Farm, Goldfish, Baked Snack Crackers, Parmesan'
'Pepperidge Farm, Goldfish, Baked Snack Crackers, Cheddar'
'Crackers, rye, wafers, plain']
Cluster 6:
['Bread, white wheat' 'Bread, reduced-calorie, rye'
'Bread, raisin, enriched'
'Bread, white, commercially prepared, low sodium, no salt'
'Bread, pita, white, enriched']
Cluster 7:
['Sage Valley, Gluten Free Vanilla Sandwich Cookies'
'Glutino, Gluten Free Wafers, Lemon Flavored'
'Glutino, Gluten Free Cookies, Vanilla Creme'
'Cookies, gluten-free, lemon wafer'
'Glutino, Gluten Free Wafers, Milk Chocolate']
Cluster 8:
['Pie crust, standard-type, dry mix, prepared, baked'
'Pie crust, refrigerated, regular, baked'
'Pie crust, standard-type, frozen, ready-to-bake, enriched, baked'
'Puff pastry, frozen, ready-to-bake, baked'
'Pie crust, cookie-type, prepared from recipe, graham cracker, chilled']
Cluster 9:
['Archway Home Style Cookies, Oatmeal Raisin'
'Archway Home Style Cookies, Coconut Macaroon'
'Archway Home Style Cookies, Peanut Butter'
'Archway Home Style Cookies, Strawberry Filled'
'Archway Home Style Cookies, Iced Molasses']
Cluster 10:
['Muffins, corn, toaster-type' 'Toaster pastries, brown-sugar-cinnamon'
'Toaster Pastries, fruit, frosted (include apples, blueberry, cherry, strawberry)'
'Toaster pastries, fruit (includes apple, blueberry, cherry, strawberry)'
'Toaster pastries, fruit, toasted (include apple, blueberry, cherry, strawberry)']
Cluster 11:
['Cookies, sugar wafer, with creme filling, sugar free'
'Cookies, vanilla sandwich with creme filling'
'Cookies, gluten-free, chocolate sandwich, with creme filling'
'Cookies, vanilla sandwich with creme filling, reduced fat'
'Cookies, sugar wafers with creme filling, regular']
Cluster 12:
['Cookies, chocolate chip, commercially prepared, regular, lower fat'
'Cookies, chocolate chip, prepared from recipe, made with butter'
'Cookies, chocolate chip, commercially prepared, soft-type'
'Cookies, peanut butter, commercially prepared, soft-type'
'Cookies, oatmeal, commercially prepared, special dietary']
Cluster 13:
['Tortillas, ready-to-bake or -fry, flour, without added calcium'
'Tortillas, ready-to-bake or -fry, whole wheat'
'Tortillas, ready-to-bake or -fry, corn'
'Tortillas, ready-to-bake or -fry, corn, without added salt'
'Tortillas, ready-to-bake or -fry, flour, shelf stable']
Cluster 14:
["Van's, Gluten Free, Totally Original Pancakes"
"Van's, Gluten Free, Totally Original Waffles"
"Van's, The Perfect 10, Crispy Six Whole Grain + Four Seed Baked Crackers, Gluten Free"]
Cluster 15:
['Pillsbury Grands, Buttermilk Biscuits, refrigerated dough'
'Biscuits, mixed grain, refrigerated dough'
'Biscuits, plain or buttermilk, refrigerated dough, higher fat'
'Pillsbury, Chocolate Chip Cookies, refrigerated dough'
'Biscuits, plain or buttermilk, refrigerated dough, lower fat']
Cluster 16:
['Rolls, dinner, plain, commercially prepared (includes brown-and-serve)'
'Wonton wrappers (includes egg roll wrappers)' 'Rolls, pumpernickel'
'Rolls, hamburger, whole grain white, calcium-fortified'
'Rolls, hamburger or hot dog, whole wheat']
Cluster 17:
['Croissants, apple' 'Croissants, cheese' 'Sweet rolls, cheese'
'Croissants, butter']
Cluster 18:
['Pancakes, plain, dry mix, incomplete, prepared'
'Pancakes, blueberry, prepared from recipe'
'Pancakes, plain, dry mix, complete (includes buttermilk)'
'Pancakes, plain, dry mix, complete, prepared'
'Pancakes, buckwheat, dry mix, incomplete']
Cluster 19:
['Bread, wheat, sprouted, toasted' 'Bread, protein (includes gluten)'
'Bread, rye, toasted' 'Bread, egg, toasted'
'Bread, french or vienna, toasted (includes sourdough)']
Cluster 20:
['Waffle, buttermilk, frozen, ready-to-heat, microwaved'
'Pancakes, plain, frozen, ready-to-heat, microwave (includes buttermilk)'
'Waffle, plain, frozen, ready-to-heat, microwave'
'Waffle, buttermilk, frozen, ready-to-heat, toasted'
'Waffles, plain, frozen, ready -to-heat, toasted']
Cluster 21:
['Schar, Gluten-Free, Wheat-Free, Classic White Bread'
'Bread, gluten-free, whole grain, made with tapioca starch and brown rice flour'
'Rolls, gluten-free, white, made with rice flour, rice starch, and corn starch'
'Rolls, gluten-free, whole grain, made with tapioca starch and brown rice flour'
"Udi's, Gluten Free, Whole Grain Dinner Rolls"]
Cluster 22:
['Tart, breakfast, low fat'
'Muffins, blueberry, commercially prepared (Includes mini-muffins)'
'Muffins, blueberry, prepared from recipe, made with low fat (2%) milk'
'Bread, cornbread, prepared from recipe, made with low fat (2%) milk'
'Muffins, corn, prepared from recipe, made with low fat (2%) milk']
Cluster 23:
['Danish pastry, raspberry, unenriched'
'Danish pastry, fruit, unenriched (includes apple, cinnamon, raisin, strawberry)'
'Danish pastry, cinnamon, enriched' 'Danish pastry, cheese'
'Danish pastry, lemon, unenriched']
Cluster 24:
['Bagels, whole grain white' 'Bagels, multigrain'
'Bagels, cinnamon-raisin, toasted'
'Bagels, plain, unenriched, with calcium propionate (includes onion, poppy, sesame)'
'Bagels, oat bran']
Cluster 25:
['Bread, whole-wheat, prepared from recipe'
'Bread, whole-wheat, commercially prepared'
'Bread, white, prepared from recipe, made with nonfat dry milk'
'Bread, paratha, whole wheat, commercially prepared, frozen'
'Bread, pita, whole-wheat']
Cluster 26:
['Bread, wheat' 'Bread, cinnamon'
'Bread, salvadoran sweet cheese (quesadilla salvadorena)' 'Bread, cheese'
'Bread, roll, Mexican, bollilo']
Cluster 27:
['Continental Mills, Krusteaz Almond Poppyseed Muffin Mix, Artificially Flavored, dry'
'Bread, stuffing, cornbread, dry mix'
'Bread, crumbs, dry, grated, seasoned' 'Muffins, wheat bran, dry mix'
'Bread, stuffing, cornbread, dry mix, prepared']
Cluster 28:
['Doughnuts, cake-type, chocolate, sugared or glazed'
'Cream puff, eclair, custard or cream filled, iced'
'Doughnuts, cake-type, plain, sugared or glazed'
'Doughnuts, cake-type, plain, chocolate-coated or frosted'
'Doughnuts, yeast-leavened, with creme filling']
Sweets:
Number of items: 358
Optimal clusters: 4
Silhouette score: 0.168
Sample foods from each Sweets cluster:
Cluster 0:
['Desserts, rennin, tablets, unsweetened' 'Cocoa, dry powder, unsweetened'
'Milk dessert, frozen, milk-fat free, chocolate'
'Puddings, chocolate, dry mix, instant, prepared with whole milk'
'Egg custards, dry mix, prepared with whole milk']
Cluster 1:
['Syrups, corn, dark' 'Pie fillings, blueberry, canned'
'Marmalade, orange' 'Syrups, maple' 'Jams and preserves, apricot']
Cluster 2:
['Candies, NESTLE, CHUNKY Bar'
'Candies, chocolate covered, caramel with nuts'
"Candies, REESE'S NUTRAGEOUS Candy Bar" 'Candies, Tamarind'
'Candies, milk chocolate coated raisins']
Cluster 3:
['Ice creams, strawberry'
'Ice creams, BREYERS, All Natural Light French Vanilla'
'Frozen novelties, juice type, POPSICLE SCRIBBLERS'
'Ice creams, BREYERS, 98% Fat Free Vanilla'
'Frozen novelties, ice type, pop']
Cereal Grains and Pasta:
Number of items: 181
Optimal clusters: 32
Silhouette score: 0.234
Sample foods from each Cereal Grains and Pasta cluster:
Cluster 0:
['Wheat, durum' 'Triticale']
Cluster 1:
['Cornmeal, yellow, self-rising, degermed, enriched'
'Cornmeal, yellow, self-rising, bolted, with wheat flour added, enriched'
'Cornmeal, white, self-rising, bolted, with wheat flour added, enriched'
'Cornmeal, white, self-rising, degermed, enriched'
'Cornmeal, white, self-rising, bolted, plain, enriched']
Cluster 2:
['Rice noodles, cooked' 'Spaghetti, spinach, dry'
'Spaghetti, spinach, cooked']
Cluster 3:
["Rice, brown, medium-grain, cooked (Includes foods for USDA's Food Distribution Program)"
"Rice, brown, medium-grain, raw (Includes foods for USDA's Food Distribution Program)"
"Rice, brown, long-grain, raw (Includes foods for USDA's Food Distribution Program)"
"Rice, brown, long-grain, cooked (Includes foods for USDA's Food Distribution Program)"]
Cluster 4:
['Rye flour, medium' 'Rye flour, light' 'Rice flour, brown' 'Rye grain'
'Rye flour, dark']
Cluster 5:
['Rice, white, long-grain, precooked or instant, enriched, dry'
'Rice, white, long-grain, regular, enriched, cooked'
'Rice, white, long-grain, regular, cooked, unenriched, with salt'
'Rice, white, medium-grain, enriched, cooked'
'Rice, white, short-grain, raw, unenriched']
Cluster 6:
['Noodles, japanese, soba, dry' 'Rice noodles, dry'
'Noodles, japanese, soba, cooked'
'Noodles, flat, crunchy, Chinese restaurant'
'Noodles, chinese, chow mein']
Cluster 7:
['Wheat flour, white, all-purpose, self-rising, enriched'
'Wheat flour, white, all-purpose, enriched, unbleached'
'Wheat flour, white, tortilla mix, enriched'
'Wheat flour, white, all-purpose, enriched, calcium-fortified'
'Wheat flours, bread, unenriched']
Cluster 8:
['Corn bran, crude' 'Rice bran, crude' 'Wheat bran, crude'
'Wheat germ, crude']
Cluster 9:
['Pasta, homemade, made without egg, cooked'
'Pasta, homemade, made with egg, cooked']
Cluster 10:
['Noodles, egg, unenriched, cooked, without added salt'
'Noodles, egg, spinach, enriched, cooked'
'Noodles, egg, enriched, cooked' 'Noodles, egg, dry, enriched'
'Noodles, egg, cooked, unenriched, with added salt']
Cluster 11:
['Wheat flour, white (industrial), 13% protein, bleached, unenriched'
'Wheat flour, white (industrial), 9% protein, bleached, enriched'
'Vital wheat gluten'
'Wheat flour, white (industrial), 15% protein, bleached, enriched'
'Wheat flour, white (industrial), 11.5% protein, bleached, enriched']
Cluster 12:
['Buckwheat groats, roasted, cooked' 'Buckwheat groats, roasted, dry'
'Buckwheat' 'Buckwheat flour, whole-groat']
Cluster 13:
['Semolina, unenriched' 'Cornmeal, degermed, unenriched, yellow'
'Cornmeal, whole-grain, yellow' 'Cornmeal, degermed, enriched, white'
'Corn grain, yellow']
Cluster 14:
['Pasta, gluten-free, brown rice flour, cooked, TINKYADA'
'Pasta, gluten-free, rice flour and rice bran extract, cooked, DE BOLES'
'Pasta, gluten-free, corn, dry'
'Pasta, gluten-free, corn flour and quinoa flour, cooked, ANCIENT HARVEST'
'Pasta, gluten-free, corn, cooked']
Cluster 15:
['Quinoa, cooked' 'Teff, cooked' 'Spelt, uncooked' 'Spelt, cooked']
Cluster 16:
['Corn flour, whole-grain, blue (harina de maiz morado)'
'Corn flour, whole-grain, white' 'Corn grain, white'
'Wheat flour, whole-grain, soft wheat' 'Triticale flour, whole-grain']
Cluster 17:
["Pasta, whole-wheat, cooked (Includes foods for USDA's Food Distribution Program)"
"Wheat flour, whole-grain (Includes foods for USDA's Food Distribution Program)"
"Oats (Includes foods for USDA's Food Distribution Program)"
"Pasta, whole-wheat, dry (Includes foods for USDA's Food Distribution Program)"]
Cluster 18:
['Wheat, KAMUT khorasan, cooked' 'Teff, uncooked'
'Wheat, KAMUT khorasan, uncooked' 'Wheat, sprouted']
Cluster 19:
['Pasta, fresh-refrigerated, plain, cooked'
'Pasta, fresh-refrigerated, plain, as purchased'
'Pasta, fresh-refrigerated, spinach, cooked'
'Pasta, fresh-refrigerated, spinach, as purchased']
Cluster 20:
['Wheat, hard red winter' 'Wheat, hard white' 'Wheat, soft white'
'Wheat, hard red spring' 'Wheat, soft red winter']
Cluster 21:
['Oat bran, raw' 'Oat bran, cooked' 'Oat flour, partially debranned']
Cluster 22:
['Millet, cooked' 'Millet, raw' 'Millet flour']
Cluster 23:
['Spaghetti, protein-fortified, dry, enriched (n x 6.25)'
'Pasta, cooked, unenriched, with added salt' 'Pasta, dry, enriched'
'Pasta, cooked, unenriched, without added salt'
'Macaroni, vegetable, enriched, dry']
Cluster 24:
['Hominy, canned, white' 'Hominy, canned, yellow']
Cluster 25:
['Bulgur, dry' 'Bulgur, cooked']
Cluster 26:
['Amaranth grain, cooked' 'Amaranth grain, uncooked' 'Quinoa, uncooked']
Cluster 27:
['Cornstarch' 'Corn flour, masa, enriched, white'
'Corn flour, masa, unenriched, white'
'Corn flour, yellow, masa, enriched']
Cluster 28:
["Pasta, whole grain, 51% whole wheat, remaining enriched semolina, dry (Includes foods for USDA's Food Distribution Program)"
"Pasta, whole grain, 51% whole wheat, remaining enriched semolina, cooked (Includes foods for USDA's Food Distribution Program)"
'Pasta, whole grain, 51% whole wheat, remaining unenriched semolina, cooked'
'Pasta, whole grain, 51% whole wheat, remaining unenriched semolina, dry']
Cluster 29:
['Barley, pearled, cooked' 'Barley, hulled' 'Tapioca, pearl, dry'
'Barley, pearled, raw']
Cluster 30:
['Sorghum flour, refined, unenriched' 'Barley malt flour' 'Sorghum grain'
'Sorghum flour, whole-grain']
Cluster 31:
['Couscous, cooked' 'Couscous, dry']
Fast Foods:
Number of items: 312
Optimal clusters: 30
Silhouette score: 0.244
Sample foods from each Fast Foods cluster:
Cluster 0:
['TACO BELL, Soft Taco with chicken, cheese and lettuce'
'TACO BELL, Original Taco with beef, cheese and lettuce'
'TACO BELL, BURRITO SUPREME with steak' 'TACO BELL, Bean Burrito'
'TACO BELL, BURRITO SUPREME with chicken']
Cluster 1:
["BURGER KING, CROISSAN'WICH with Sausage and Cheese"
"BURGER KING, CROISSAN'WICH with Sausage, Egg and Cheese"
"BURGER KING, CROISSAN'WICH with Egg and Cheese"]
Cluster 2:
['KFC, Fried Chicken, ORIGINAL RECIPE, Skin and Breading'
'KFC, Fried Chicken, EXTRA CRISPY, Breast, meat and skin with breading'
'KFC, Fried Chicken, EXTRA CRISPY, Thigh, meat only, skin and breading removed'
'KFC, Fried Chicken, EXTRA CRISPY, Breast, meat only, skin and breading removed'
'KFC, Fried Chicken, ORIGINAL RECIPE, Thigh, meat and skin with breading']
Cluster 3:
['Fast Food, Pizza Chain, 14" pizza, cheese topping, regular crust'
'Fast Food, Pizza Chain, 14" pizza, pepperoni topping, thick crust'
'Fast Food, Pizza Chain, 14" pizza, cheese topping, thin crust'
'Fast Food, Pizza Chain, 14" pizza, sausage topping, regular crust'
'Fast Food, Pizza Chain, 14" pizza, pepperoni topping, regular crust']
Cluster 4:
['SUBWAY, cold cut sub on white bread with lettuce and tomato'
'SUBWAY, black forest ham sub on white bread with lettuce and tomato'
'SUBWAY, roast beef sub on white bread with lettuce and tomato'
'SUBWAY, meatball marinara sub on white bread (no toppings)'
'SUBWAY, turkey breast sub on white bread with lettuce and tomato']
Cluster 5:
['Fast foods, chicken fillet sandwich, plain with pickles'
'Fast foods, fish sandwich, with tartar sauce'
'Fast foods, potato, mashed'
'Fast foods, potatoes, hash browns, round pieces or patty'
'Fast foods, breadstick, soft, prepared with garlic and parmesan cheese']
Cluster 6:
["McDONALD'S, Fruit 'n Yogurt Parfait (without granola)"
"McDONALD'S, Fruit 'n Yogurt Parfait"
"McDONALD'S, McFLURRY with M&M'S CANDIES"
"McDONALD'S, Hot Caramel Sundae" "McDONALD'S, Side Salad"]
Cluster 7:
['School Lunch, pizza, BIG DADDY\'S LS 16" 51% Whole Grain Rolled Edge Turkey Pepperoni Pizza, frozen'
"School Lunch, pizza, TONY'S SMARTPIZZA Whole Grain 4x6 Cheese Pizza 50/50 Cheese, frozen"
"School Lunch, pizza, TONY'S Breakfast Pizza Sausage, frozen"
'School Lunch, chicken nuggets, whole grain breaded'
'School Lunch, chicken patty, whole grain breaded']
Cluster 8:
['Fast foods, cheeseburger; single, large patty; plain'
'Fast foods, hamburger; double, large patty; with condiments, vegetables and mayonnaise'
'Fast foods, cheeseburger; single, large patty; with condiments'
'Fast foods, cheeseburger; double, regular patty; double decker bun with condiments and special sauce'
'Fast foods, cheeseburger; double, regular patty; with condiments']
Cluster 9:
['Fast foods, bagel, with egg, sausage patty, cheese, and condiments'
'Fast foods, french toast sticks'
'Fast foods, bagel, with breakfast steak, egg, cheese, and condiments'
'Fast foods, miniature cinnamon rolls'
'Fast foods, croissant, with egg, cheese, and sausage']
Cluster 10:
["WENDY'S, Frosty Dairy Dessert" "WENDY'S, Chicken Nuggets"
"WENDY'S, french fries"]
Cluster 11:
['PIZZA HUT 12" Cheese Pizza, THIN \'N CRISPY Crust'
'PIZZA HUT 12" Cheese Pizza, Hand-Tossed Crust'
'PIZZA HUT 14" Pepperoni Pizza, Pan Crust'
'PIZZA HUT 14" Cheese Pizza, THIN \'N CRISPY Crust'
'PIZZA HUT 12" Super Supreme Pizza, Hand-Tossed Crust']
Cluster 12:
['Fast foods, taco with beef, cheese and lettuce, hard shell'
'Fast foods, breakfast burrito, with egg, cheese, and sausage'
'Fast foods, nachos, with cheese, beans, ground beef, and tomatoes'
'Fast foods, burrito, with beans, cheese, and beef'
'Fast foods, burrito, with beans']
Cluster 13:
['Fast Foods, Fried Chicken, Wing, meat only, skin and breading removed'
'Fast foods, onion rings, breaded and fried'
'Fast Foods, Fried Chicken, Breast, meat only, skin and breading removed'
'Fast Foods, Fried Chicken, Thigh, meat and skin and breading'
'Fast Foods, Fried Chicken, Wing, meat and skin and breading']
Cluster 14:
['POPEYES, biscuit'
'POPEYES, Fried Chicken, Mild, Breast, meat only, skin and breading removed'
'POPEYES, Fried Chicken, Mild, Drumstick, meat and skin with breading'
'POPEYES, Coleslaw'
'POPEYES, Fried Chicken, Mild, Thigh, meat and skin with breading']
Cluster 15:
['DIGIORNO Pizza, pepperoni topping, thin crispy crust, frozen, baked'
'DIGIORNO Pizza, cheese topping, rising crust, frozen, baked'
'DIGIORNO Pizza, cheese topping, cheese stuffed crust, frozen, baked'
'DIGIORNO Pizza, supreme topping, thin crispy crust, frozen, baked'
'DIGIORNO Pizza, pepperoni topping, rising crust, frozen, baked']
Cluster 16:
['Fast foods, submarine sandwich, oven roasted chicken on white bread with lettuce and tomato'
'Fast foods, submarine sandwich, turkey, roast beef and ham on white bread with lettuce and tomato'
'Fast foods, submarine sandwich, turkey breast on white bread with lettuce and tomato'
'Fast foods, submarine sandwich, cold cut on white bread with lettuce and tomato'
'Fast foods, submarine sandwich, roast beef on white bread with lettuce and tomato']
Cluster 17:
['BURGER KING, Premium Fish Sandwich' 'BURGER KING, french fries'
'BURGER KING, Original Chicken Sandwich' 'BURGER KING, Cheeseburger'
'BURGER KING, Onion Rings']
Cluster 18:
['Pizza, cheese topping, regular crust, frozen, cooked'
'Pizza, pepperoni topping, regular crust, frozen, cooked'
'Pizza, meat topping, thick crust, frozen, cooked'
'Pizza, cheese topping, rising crust, frozen, cooked'
'Pizza, meat and vegetable topping, regular crust, frozen, cooked']
Cluster 19:
["McDONALD'S, Hotcakes (with 2 pats margarine & syrup)"
"McDONALD'S, Bacon Egg & Cheese Biscuit"
"McDONALD'S, Hotcakes and Sausage"
"McDONALD'S, Deluxe Breakfast, with syrup and margarine"
"McDONALD'S, Sausage Biscuit with Egg"]
Cluster 20:
['DOMINO\'S 14" EXTRAVAGANZZA FEAST Pizza, Classic Hand-Tossed Crust'
'DOMINO\'S 14" Sausage Pizza, Ultimate Deep Dish Crust'
'DOMINO\'S 14" Pepperoni Pizza, Classic Hand-Tossed Crust'
'DOMINO\'S 14" Sausage Pizza, Classic Hand-Tossed Crust'
'DOMINO\'S 14" Pepperoni Pizza, Crunchy Thin Crust']
Cluster 21:
["WENDY'S, Jr. Hamburger, with cheese"
"WENDY'S, Double Stack, with cheese"
"WENDY'S, DAVE'S Hot 'N Juicy 1/4 LB, single"
"WENDY'S, Homestyle Chicken Fillet Sandwich"
"WENDY'S, CLASSIC DOUBLE, with cheese"]
Cluster 22:
['Fast foods, sundae, caramel'
'Fast foods, strawberry banana smoothie made with ice and low-fat yogurt'
'Fast foods, sundae, strawberry'
'Light Ice Cream, soft serve, blended with cookie pieces'
'Fast foods, sundae, hot fudge']
Cluster 23:
['CHICK-FIL-A, hash browns' 'CHICK-FIL-A, Chick-n-Strips'
'CHICK-FIL-A, chicken sandwich']
Cluster 24:
['Fast Foods, grilled chicken filet sandwich, with lettuce, tomato and spread'
'Fast foods, crispy chicken, bacon, and tomato club sandwich, with cheese, lettuce, and mayonnaise'
'Fast foods, grilled chicken, bacon and tomato club sandwich, with cheese, lettuce, and mayonnaise'
'Fast foods, grilled chicken in tortilla, with lettuce, cheese, and ranch sauce'
'Fast foods, crispy chicken in tortilla, with lettuce, cheese, and ranch sauce']
Cluster 25:
['PAPA JOHN\'S 14" Cheese Pizza, Original Crust'
'LITTLE CAESARS 14" Original Round Cheese Pizza, Regular Crust'
'LITTLE CAESARS 14" Original Round Meat and Vegetable Pizza, Regular Crust'
'LITTLE CAESARS 14" Cheese Pizza, Large Deep Dish Crust'
'PAPA JOHN\'S 14" The Works Pizza, Original Crust']
Cluster 26:
['Fast foods, griddle cake sandwich, egg, cheese, and sausage'
'Fast foods, biscuit, with crispy chicken fillet'
'Fast foods, biscuit, with egg, cheese, and bacon'
'Fast foods, english muffin, with egg, cheese, and sausage'
'Fast foods, egg, scrambled']
Cluster 27:
['BURGER KING, WHOPPER, with cheese' 'BURGER KING, WHOPPER, no cheese'
'BURGER KING, DOUBLE WHOPPER, no cheese'
'BURGER KING, DOUBLE WHOPPER, with cheese'
'BURGER KING, Double Cheeseburger']
Cluster 28:
["McDONALD'S, RANCH SNACK WRAP, Crispy"
"McDONALD'S Bacon Ranch Salad with Crispy Chicken"
"McDONALD'S, RANCH SNACK WRAP, Grilled"
"McDONALD'S, Bacon Ranch Salad without chicken"
"McDONALD'S, Bacon Ranch Salad with Grilled Chicken"]
Cluster 29:
["McDONALD'S, Chicken McNUGGETS" "McDONALD'S, Double Cheeseburger"
"McDONALD'S, FILET-O-FISH" "McDONALD'S, QUARTER POUNDER with Cheese"
"McDONALD'S, french fries"]
Meals, Entrees, and Side Dishes:
Number of items: 81
Optimal clusters: 32
Silhouette score: 0.215
Sample foods from each Meals, Entrees, and Side Dishes cluster:
Cluster 0:
['Rice bowl with chicken, frozen entree, prepared (includes fried, teriyaki, and sweet and sour varieties)']
Cluster 1:
['Rice and vermicelli mix, chicken flavor, prepared with 80% margarine'
'Rice and vermicelli mix, rice pilaf flavor, unprepared'
'Rice and vermicelli mix, rice pilaf flavor, prepared with 80% margarine'
'Rice and vermicelli mix, beef flavor, unprepared'
'RICE-A-RONI, chicken flavor, unprepared']
Cluster 2:
['Lasagna with meat & sauce, low-fat, frozen entree'
'Spaghetti with meat sauce, frozen entree'
'Lasagna with meat & sauce, frozen entree'
'Beef macaroni with tomato sauce, frozen entree, reduced fat']
Cluster 3:
['Chicken, thighs, frozen, breaded, reheated'
'Chicken tenders, breaded, frozen, prepared']
Cluster 4:
['Macaroni and cheese, frozen entree' 'Macaroni and Cheese, canned entree'
'Macaroni and Cheese, canned, microwavable']
Cluster 5:
['Chili with beans, microwavable bowls'
'Burrito, beef and bean, microwaved']
Cluster 6:
['Turkey Pot Pie, frozen entree'
'Chicken pot pie, frozen entree, prepared'
'Beef Pot Pie, frozen entree, prepared' 'Beef stew, canned entree']
Cluster 7:
['Lasagna with meat sauce, frozen, prepared'
'Lasagna, Vegetable, frozen, baked' 'Lasagna, cheese, frozen, prepared'
'Lasagna, cheese, frozen, unprepared']
Cluster 8:
['Egg rolls, vegetable, frozen, prepared'
'Egg rolls, chicken, refrigerated, heated'
'Egg rolls, pork, refrigerated, heated' 'Pizza rolls, frozen, unprepared']
Cluster 9:
['HUNGRY MAN, Salisbury Steak With Gravy, frozen, unprepared'
'BANQUET, Salisbury Steak With Gravy, family size, frozen, unprepared'
'Salisbury steak with gravy, frozen']
Cluster 10:
['Beef, corned beef hash, with potato, canned']
Cluster 11:
['JIMMY DEAN, Sausage, Egg, and Cheese Breakfast Biscuit, frozen, unprepared'
'Sausage, egg and cheese breakfast biscuit']
Cluster 12:
["HOT POCKETS Ham 'N Cheese Stuffed Sandwich, frozen"
'LEAN POCKETS, Ham N Cheddar'
'HOT POCKETS, meatballs & mozzarella stuffed sandwich, frozen'
'HOT POCKETS, CROISSANT POCKETS Chicken, Broccoli, and Cheddar Stuffed Sandwich, frozen']
Cluster 13:
['Yellow rice with seasoning, dry packet mix, unprepared'
'Rice mix, cheese flavor, dry mix, unprepared'
'Rice mix, white and wild, flavored, unprepared'
'Spanish rice mix, dry mix, unprepared'
'Spanish rice mix, dry mix, prepared (with canola/vegetable oil blend or diced tomatoes and margarine)']
Cluster 14:
['Ravioli, meat-filled, with tomato sauce or meat sauce, canned'
'Ravioli, cheese-filled, canned'
'Ravioli, cheese with tomato sauce, frozen, not prepared, includes regular and light entrees']
Cluster 15:
['Pasta mix, classic cheeseburger macaroni, unprepared'
'Pasta mix, Italian four cheese lasagna, unprepared'
'Pasta mix, Italian lasagna, unprepared'
'Pasta mix, classic beef, unprepared']
Cluster 16:
['Potsticker or wonton, pork and vegetable, frozen, unprepared']
Cluster 17:
['Macaroni and cheese, box mix with cheese sauce, prepared'
'Macaroni and cheese, dry mix, prepared with 2% milk and 80% stick margarine from dry mix'
'Macaroni and cheese, box mix with cheese sauce, unprepared'
'Macaroni and cheese dinner with dry sauce mix, boxed, uncooked']
Cluster 18:
['Chili, no beans, canned entree'
'Chili con carne with beans, canned entree']
Cluster 19:
['Macaroni or noodles with cheese, microwaveable, unprepared'
'Macaroni or noodles with cheese, made from reduced fat packaged mix, unprepared']
Cluster 20:
['Potato salad with egg']
Cluster 21:
['Turnover, filled with egg, meat and cheese, frozen'
'Turnover, cheese-filled, tomato-based sauce, frozen, unprepared'
'Turnover, meat- and cheese-filled, tomato-based sauce, reduced fat, frozen'
'Turnover, chicken- or turkey-, and vegetable-filled, reduced fat, frozen']
Cluster 22:
['Taquitos, frozen, beef and cheese, oven-heated'
'Taquitos, frozen, chicken and cheese, oven-heated']
Cluster 23:
['Dumpling, potato- or cheese-filled, frozen']
Cluster 24:
['Spaghetti, with meatballs in tomato sauce, canned'
'Pasta with tomato sauce, no meat, canned'
'Pasta with Sliced Franks in Tomato Sauce, canned entree']
Cluster 25:
['Turkey, stuffing, mashed potatoes w/gravy, assorted vegetables, frozen, microwaved']
Cluster 26:
['Corn dogs, frozen, prepared']
Cluster 27:
['Tortellini, pasta with cheese filling, fresh-refrigerated, as purchased']
Cluster 28:
['Lean Pockets, Meatballs & Mozzarella']
Cluster 29:
['Chicken, nuggets, dark and white meat, precooked, frozen, not reheated'
'Chicken, nuggets, white meat, precooked, frozen, not reheated']
Cluster 30:
['Pulled pork in barbecue sauce']
Cluster 31:
['Burrito, bean and cheese, frozen' 'Burrito, beef and bean, frozen']
American Indian/Alaska Native Foods:
Number of items: 149
Optimal clusters: 2
Silhouette score: 0.180
Sample foods from each American Indian/Alaska Native Foods cluster:
Cluster 0:
['Whale, beluga, meat, raw (Alaska Native)'
'Fish, salmon, red, (sockeye), kippered (Alaska Native)'
'Seal, bearded (Oogruk), meat, partially dried (Alaska Native)'
'Stew/soup, caribou (Alaska Native)'
'Fish, salmon, chum, dried (Alaska Native)']
Cluster 1:
['Prairie Turnips, boiled (Northern Plains Indians)'
'Melon, banana (Navajo)' 'Pinon Nuts, roasted (Navajo)'
'Bread, blue corn, somiviki (Hopi)'
'Chokecherries, raw, pitted (Shoshone Bannock)']
Restaurant Foods:
Number of items: 109
Optimal clusters: 9
Silhouette score: 0.249
Sample foods from each Restaurant Foods cluster:
Cluster 0:
['CRACKER BARREL, steak fries'
'CRACKER BARREL, chicken tenderloin platter, fried'
'CRACKER BARREL, grilled sirloin steak' 'CRACKER BARREL, coleslaw'
'CRACKER BARREL, farm raised catfish platter']
Cluster 1:
['Restaurant, Chinese, vegetable lo mein, without meat'
'Restaurant, Chinese, beef and vegetables'
'Restaurant, Chinese, vegetable chow mein, without meat or noodles'
'Restaurant, Chinese, chicken and vegetables'
"Restaurant, Chinese, general tso's chicken"]
Cluster 2:
["DENNY'S, fish fillet, battered or breaded, fried"
"DENNY'S, chicken nuggets, star shaped, from kid's menu"
"DENNY'S, chicken strips" "DENNY'S, french fries" "DENNY'S, coleslaw"]
Cluster 3:
['Restaurant, Mexican, cheese quesadilla'
'Restaurant, Latino, arroz con leche (rice pudding)'
'Restaurant, Mexican, refried beans'
'Restaurant, Latino, empanadas, beef, prepared'
'Restaurant, Mexican, cheese tamales']
Cluster 4:
['Restaurant, family style, coleslaw'
"Restaurant, family style, chicken fingers, from kid's menu"
'Restaurant, family style, sirloin steak'
'Restaurant, family style, hash browns'
'Restaurant, family style, fried mozzarella sticks']
Cluster 5:
["CARRABBA'S ITALIAN GRILL, lasagne"
'Restaurant, Italian, chicken parmesan without pasta'
"CARRABBA'S ITALIAN GRILL, spaghetti with pomodoro sauce"
'Restaurant, Italian, lasagna with meat'
'OLIVE GARDEN, spaghetti with pomodoro sauce']
Cluster 6:
["APPLEBEE'S, fish, hand battered" "APPLEBEE'S, 9 oz house sirloin steak"
"APPLEBEE'S, mozzarella sticks" "APPLEBEE'S, chili"
"APPLEBEE'S, coleslaw"]
Cluster 7:
['ON THE BORDER, refried beans' 'ON THE BORDER, Mexican rice'
'ON THE BORDER, cheese enchilada' 'ON THE BORDER, cheese quesadilla'
'ON THE BORDER, soft taco with ground beef, cheese and lettuce']
Cluster 8:
["T.G.I. FRIDAY'S, FRIDAY'S Shrimp, breaded"
"T.G.I. FRIDAY'S, chicken fingers, from kids' menu"
"T.G.I. FRIDAY'S, french fries"
"T.G.I. FRIDAY'S, classic sirloin steak (10 oz)"
"T.G.I. FRIDAY'S, macaroni & cheese, from kid's menu"]
after append_semantic_embedding_clusters() imputed_food_rows (7713, 31)
imputed_food_rows['cluster'].unique().shape (9,)
Consolidate Rare Clusters#
Rare embedding clusters pose statistical and practical challenges:
Statistical validity: Small clusters provide too few observations (and a highly skewed cluster-size distribution), risking unreliable predictions and overfitting in nutrient models.
Model stability: Sparse clusters introduce noise, reducing performance.
Practical utility: Clusters covering less than 1% of samples often lack meaningful representation.
Solution: Combine rare clusters into an “Other” category or merge them with larger, semantically similar clusters. This approach preserves the most valuable subcategories while addressing statistical limitations.
By consolidating rare clusters, we hope our models become more robust, leveraging meaningful patterns without being skewed by sparse, unreliable data.
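For comparison, here is a minimal sketch of the simpler "Other"-bucket variant mentioned above; the function that follows instead falls back to each row's food_group. The helper name consolidate_to_other is illustrative only, and it assumes the same 'cluster' column and 1% threshold.
import pandas as pd

def consolidate_to_other(df: pd.DataFrame, threshold: float = 0.01) -> pd.DataFrame:
    """Sketch: lump clusters whose share of rows is below `threshold` into a single 'Other' label."""
    out = df.copy()
    share = out['cluster'].value_counts(normalize=True)
    rare = share[share < threshold].index
    out.loc[out['cluster'].isin(rare), 'cluster'] = 'Other'
    return out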
def consolidate_rare_clusters3(df, threshold=0.01):
"""
Replace rare cluster values with food_group values if they appear in less than threshold_pct% of rows.
Prints analysis of the changes made.
Parameters:
-----------
df : pandas.DataFrame
DataFrame containing 'cluster' and 'food_group' columns
threshold_pct : float, optional
Percentage threshold below which clusters are considered rare (default: 1)
Returns:
--------
pandas.DataFrame
DataFrame with modified cluster values
dict
Dictionary containing analysis metrics
"""
# Create a copy to avoid modifying the original
df_modified = df.copy()
# Calculate the threshold count
threshold_count = len(df) * (threshold)
# Get cluster value counts
cluster_counts = df['cluster'].value_counts()
rare_clusters = cluster_counts[cluster_counts < threshold_count].index.tolist()
# Store original stats
total_rows = len(df)
original_unique_clusters = len(cluster_counts)
rows_in_rare_clusters = cluster_counts[rare_clusters].sum()
# Replace rare clusters with food_group values
mask = df_modified['cluster'].isin(rare_clusters)
df_modified.loc[mask, 'cluster'] = df_modified.loc[mask, 'food_group']
# Calculate new stats
new_cluster_counts = df_modified['cluster'].value_counts()
final_unique_clusters = len(new_cluster_counts)
# Prepare analysis results
analysis = {
'total_rows': total_rows,
'original_unique_clusters': original_unique_clusters,
'final_unique_clusters': final_unique_clusters,
'clusters_removed': len(rare_clusters),
'rows_affected': rows_in_rare_clusters,
'rows_affected_pct': (rows_in_rare_clusters / total_rows) * 100,
'rare_clusters': rare_clusters
}
# Print analysis
print(f"Impact Analysis of Cluster Replacement:")
print(f"----------------------------------------")
print(f"Total rows in dataset: {analysis['total_rows']:,}")
print(f"Original unique clusters: {analysis['original_unique_clusters']:,}")
print(f"Clusters below {threshold*100}% threshold: {analysis['clusters_removed']:,}")
print(f"Rows affected by replacement: {analysis['rows_affected']:,} ({analysis['rows_affected_pct']:.2f}%)")
print(f"Final unique clusters: {analysis['final_unique_clusters']:,}")
print(f"\nRare clusters that were replaced:")
for cluster in rare_clusters:
count = cluster_counts[cluster]
pct = (count / total_rows) * 100
print(f"- {cluster}: {count:,} rows ({pct:.2f}%)")
return df_modified
result = run_once("consolidate_rare_clusters3", lambda: consolidate_rare_clusters3(imputed_food_rows))
if result is not None:
imputed_food_rows = result
Impact Analysis of Cluster Replacement:
----------------------------------------
Total rows in dataset: 7,713
Original unique clusters: 8
Clusters below 1.0% threshold: 7
Rows affected by replacement: 200 (2.59%)
Final unique clusters: 3
Rare clusters that were replaced:
- Dairy and Egg Products_3: 50 rows (0.65%)
- Dairy and Egg Products_0: 49 rows (0.64%)
- Dairy and Egg Products_2: 32 rows (0.41%)
- Dairy and Egg Products_4: 27 rows (0.35%)
- Dairy and Egg Products_5: 23 rows (0.30%)
- Baby Foods_0: 18 rows (0.23%)
- Baby Foods_1: 1 rows (0.01%)
Feature Encoding Pipeline#
This pipeline implements adaptive feature preprocessing based on statistical properties of nutrient measurements across food items:
For highly skewed nutrient features (|skewness| > 5): Applies winsorization → Yeo-Johnson transform → Robust scaling
For moderately skewed features (|skewness| > 2): Applies winsorization → Log transform → Robust scaling
For heavy-tailed features (kurtosis > 10): Applies winsorization → Robust scaling with wider quantile range
For normally distributed features: Applies winsorization → Standard scaling
The pipeline preserves food name embeddings without transformation, one-hot encodes the embedding-based clusters, and handles missing values.
It returns both the transformed features and a reusable transformation function for new data.
The reusable transformation function is calibrated on the training set only and is later used to similarly encode test data.
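Before the full pipeline, a small self-contained illustration of the routing rule described above (not part of the pipeline itself): it classifies one synthetic right-skewed column using the listed thresholds and applies the matching transform; winsorization and scaling are omitted for brevity.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
col = pd.Series(rng.lognormal(mean=0.0, sigma=2.0, size=1000))  # toy, heavily right-skewed column

skewness, kurt = col.skew(), col.kurtosis()
if abs(skewness) > 5:
    transformed = stats.yeojohnson(col.to_numpy())[0]  # extreme skew -> Yeo-Johnson
elif abs(skewness) > 2:
    transformed = np.log1p(col.to_numpy())             # moderate skew -> log transform
elif kurt > 10:
    transformed = col.to_numpy()                       # heavy tails -> robust scaling only (omitted here)
else:
    transformed = col.to_numpy()                       # roughly normal -> standard scaling (omitted here)
print(f"skewness before: {skewness:.2f}, after: {pd.Series(transformed).skew():.2f}")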
from sklearn.preprocessing import StandardScaler, RobustScaler, OneHotEncoder, FunctionTransformer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.base import BaseEstimator, TransformerMixin
import numpy as np
import pandas as pd
from scipy import stats
def calibrate_feature_selection_and_scaling(df, scaling_method='adaptive', winsorize_percentile=0.95):
"""
Advanced preprocessing pipeline with adaptive scaling based on feature distributions.
Returns both the preprocessing function and the initial transformation.
"""
from sklearn.preprocessing import StandardScaler, RobustScaler, OneHotEncoder, FunctionTransformer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.base import BaseEstimator, TransformerMixin
import numpy as np
import pandas as pd
from scipy import stats
def winsorize_high(X, percentile=0.95):
"""Winsorize values above the given percentile."""
if X.ndim == 1:
X = X.reshape(-1, 1)
cap = np.percentile(X, percentile * 100, axis=0)
return np.where(X > cap, cap, X)
def analyze_distribution(X):
"""Analyze feature distribution characteristics."""
if X.ndim == 1:
X = X.reshape(-1, 1)
skewness = pd.Series(X.ravel()).skew()
kurtosis = pd.Series(X.ravel()).kurtosis()
return skewness, kurtosis
class SingleColumnTransformer(BaseEstimator, TransformerMixin):
"""Custom transformer that ensures proper handling of single columns."""
def __init__(self, func, **kwargs):
self.func = func
self.kwargs = kwargs
def fit(self, X, y=None):
return self
def transform(self, X):
if X.ndim == 1:
X = X.reshape(-1, 1)
if isinstance(X, pd.DataFrame):
X = X.values
result = np.zeros_like(X, dtype=float)
for i in range(X.shape[1]):
col_data = X[:, i].reshape(-1)
try:
if self.func == 'yeojohnson':
result[:, i] = stats.yeojohnson(col_data)[0]
elif self.func == 'boxcox':
result[:, i] = stats.boxcox(col_data + 1e-10)[0]
elif self.func == 'log':
result[:, i] = np.log1p(col_data)
elif self.func == 'sigmoid':
result[:, i] = 1 / (1 + np.exp(-col_data))
except:
# Fallback to simpler transformation
result[:, i] = np.log1p(col_data) if np.all(col_data >= 0) else col_data
return result
exclude_cols = {'food_id', 'food_name', 'food_group', "source_type", "cluster", "embedding"}
embedding_cols = [col for col in df.columns if col.startswith('embedding_')]
categorical_cols = [col for col in ['cluster'] if col in df.columns]
numeric_cols = [col for col in df.columns
if col not in exclude_cols.union(embedding_cols, categorical_cols, {'Iron, Fe'})]
y = np.log1p(df['Iron, Fe']) if 'Iron, Fe' in df.columns else None
transformers = []
if scaling_method == 'adaptive':
if embedding_cols:
transformers.append(('embeddings_passthrough', 'passthrough', embedding_cols))
if numeric_cols:
numeric_data = df[numeric_cols]
# Group features by their characteristics
extreme_skew = []
moderate_skew = []
heavy_tail = []
normal = []
for col in numeric_cols:
skewness, kurt = analyze_distribution(numeric_data[col].values)
abs_skew = abs(skewness)
if abs_skew > 5:
extreme_skew.append(col)
elif abs_skew > 2:
moderate_skew.append(col)
elif kurt > 10:
heavy_tail.append(col)
else:
normal.append(col)
# Add transformers for each group
if extreme_skew:
transformers.append((
'extreme_skew',
Pipeline([
('winsorize', FunctionTransformer(winsorize_high)),
('transform', SingleColumnTransformer('yeojohnson')),
('scale', RobustScaler(quantile_range=(5, 95)))
]),
extreme_skew
))
if moderate_skew:
transformers.append((
'moderate_skew',
Pipeline([
('winsorize', FunctionTransformer(winsorize_high)),
('transform', SingleColumnTransformer('log')),
('scale', RobustScaler())
]),
moderate_skew
))
if heavy_tail:
transformers.append((
'heavy_tail',
Pipeline([
('winsorize', FunctionTransformer(winsorize_high)),
('scale', RobustScaler(quantile_range=(10, 90)))
]),
heavy_tail
))
if normal:
transformers.append((
'normal',
Pipeline([
('winsorize', FunctionTransformer(winsorize_high)),
('scale', StandardScaler())
]),
normal
))
else:
numeric_cols = embedding_cols + numeric_cols
if scaling_method == 'robust':
scaler = RobustScaler()
elif scaling_method == 'standard':
scaler = StandardScaler()
elif scaling_method == 'log':
scaler = Pipeline([
('transform', SingleColumnTransformer('log')),
('scale', StandardScaler())
])
transformers.append((
f'{scaling_method}_scaler',
Pipeline([
('winsorize', FunctionTransformer(winsorize_high)),
('scale', scaler)
]),
numeric_cols
))
if categorical_cols:
transformers.append(
('categorical',
OneHotEncoder(drop='first', sparse_output=False, handle_unknown='ignore'),
categorical_cols)
)
preprocessor = ColumnTransformer(transformers=transformers, remainder='drop', sparse_threshold=0)
X_scaled = preprocessor.fit_transform(df)
# First pass to get encoder feature names
if categorical_cols:
encoder = OneHotEncoder(drop='first', sparse_output=False, handle_unknown='ignore')
encoder.fit(df[categorical_cols])
cat_feature_names = list(encoder.get_feature_names_out(categorical_cols))
else:
cat_feature_names = []
# Combine all feature names in the correct order
feature_names = []
for name, _, cols in transformers:
if name == 'categorical':
feature_names.extend(cat_feature_names)
else:
feature_names.extend(cols)
# Initial transformation
X_initial = X_scaled
y_initial = y
def reapply_transformation(new_df):
"""
Applies fitted transformation to new data.
Args:
new_df (pd.DataFrame): New data to transform
Returns:
tuple: (X_transformed, y_transformed)
"""
# Check if all required original columns are present
required_cols = set(numeric_cols + categorical_cols)
missing_cols = required_cols - set(new_df.columns)
if missing_cols:
raise ValueError(f"Missing required columns: {missing_cols}")
X_new_scaled = preprocessor.transform(new_df)
y_new = np.log1p(new_df['Iron, Fe']) if 'Iron, Fe' in new_df.columns else None
return X_new_scaled, y_new
return X_initial, y_initial, feature_names, reapply_transformation
_, _, feature_names, apply_feature_selection_and_scaling = calibrate_feature_selection_and_scaling(imputed_food_rows)
print(f"\nNumber of features: {len(feature_names)}")
print(f"\nFeatures: {feature_names}")
print(np.sort(feature_names))
Number of features: 27
Features: ['Ash', 'Calcium, Ca', 'Copper, Cu', 'Linoleic fatty acid', 'Magnesium, Mg', 'Niacin', 'Palmitoleic fatty acid', 'Phosphorus, P', 'Potassium, K', 'Riboflavin', 'Sodium, Na', 'Thiamin', 'Zinc, Zn', 'Oleic fatty acid', 'Protein', 'Water', 'embed_0', 'embed_1', 'embed_2', 'embed_3', 'embed_4', 'embed_5', 'embed_6', 'embed_7', 'cluster_Dairy and Egg Products', 'cluster_Dairy and Egg Products_1', 'cluster_nan']
['Ash' 'Calcium, Ca' 'Copper, Cu' 'Linoleic fatty acid' 'Magnesium, Mg'
'Niacin' 'Oleic fatty acid' 'Palmitoleic fatty acid' 'Phosphorus, P'
'Potassium, K' 'Protein' 'Riboflavin' 'Sodium, Na' 'Thiamin' 'Water'
'Zinc, Zn' 'cluster_Dairy and Egg Products'
'cluster_Dairy and Egg Products_1' 'cluster_nan' 'embed_0' 'embed_1'
'embed_2' 'embed_3' 'embed_4' 'embed_5' 'embed_6' 'embed_7']
Verify Feature Vectors#
Multicollinearity Check:
Computes correlation matrix
Identifies highly correlated feature pairs (>0.8)
Important for Linear Regression and Elastic Net
Matrix Conditioning:
Calculates condition number to check for numerical stability
High condition numbers (>1000) indicate potential problems
Distribution Analysis:
Computes skewness and kurtosis for each feature
Performs normality tests
Important for all models, especially Linear Regression
Visualizations:
Correlation heatmap
Distribution plots for features
PCA explained variance ratio
Q-Q plots for normality assessment
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.preprocessing import PowerTransformer
from sklearn.decomposition import PCA
from scipy.stats import kurtosis, skew
import warnings
warnings.filterwarnings('ignore')
def verify_feature_vectors(df):
"""
Verify feature vectors for suitability with various ML models and generate diagnostics.
Returns the processed X and y along with a dictionary of diagnostic metrics.
"""
X, y = apply_feature_selection_and_scaling(df)
# Initialize diagnostics dictionary
diagnostics = {}
# 1. Check for multicollinearity
correlation_matrix = pd.DataFrame(X).corr()
high_correlation_pairs = get_high_correlation_pairs(correlation_matrix)
diagnostics['high_correlation_pairs'] = high_correlation_pairs
# 2. Check conditioning of the feature matrix
condition_number = np.linalg.cond(X)
diagnostics['condition_number'] = condition_number
# 3. Analyze feature distributions
distribution_stats = analyze_distributions(X)
diagnostics['distribution_stats'] = distribution_stats
# 4. Generate visualizations
generate_diagnostic_plots(X, correlation_matrix, distribution_stats)
return X, y, diagnostics
def get_high_correlation_pairs(correlation_matrix, threshold=0.8):
"""Find highly correlated feature pairs."""
high_corr_pairs = []
for i in range(len(correlation_matrix.columns)):
for j in range(i+1, len(correlation_matrix.columns)):
if abs(correlation_matrix.iloc[i,j]) > threshold:
high_corr_pairs.append({
'feature1': correlation_matrix.columns[i],
'feature2': correlation_matrix.columns[j],
'correlation': correlation_matrix.iloc[i,j]
})
return high_corr_pairs
def analyze_distributions(X):
"""Analyze the statistical properties of feature distributions."""
stats_dict = {}
X_df = pd.DataFrame(X)
for col in X_df.columns:
stats_dict[col] = {
'skewness': skew(X_df[col]),
'kurtosis': kurtosis(X_df[col]),
'normality_test': stats.normaltest(X_df[col])[1]
}
return stats_dict
def generate_diagnostic_plots(X, correlation_matrix, distribution_stats):
"""Generate comprehensive diagnostic visualizations."""
X_df = pd.DataFrame(X)
plot_scale = 1
# 1a. Correlation Heatmap
plt.figure(figsize=(plot_scale*10, plot_scale*8))
sns.heatmap(correlation_matrix, annot=False, cmap='coolwarm', center=0)
plt.title('Feature Correlation Heatmap')
plt.tight_layout()
plt.show()
# 1b. Correlation Heatmap without One-hot Encoded Features
plt.figure(figsize=(plot_scale*10, plot_scale*8))
sns.heatmap(correlation_matrix.iloc[:10, :10], annot=False, cmap='coolwarm', center=0)
plt.title('Correlations without One-hot Encoded Features')
plt.tight_layout()
plt.show()
# 2. Distribution Plots
plt.figure(figsize=(plot_scale*10, plot_scale*8))
for column in X_df.columns[:10]: # Plot first N features for clarity
sns.kdeplot(data=X_df[column], label=f'Feature {column}')
plt.title('Feature Distributions')
plt.legend()
plt.tight_layout()
plt.show()
# 3. PCA Explained Variance
plt.figure(figsize=(plot_scale*10, plot_scale*8))
pca = PCA()
pca.fit(X)
plt.plot(range(1, len(pca.explained_variance_ratio_) + 1),
np.cumsum(pca.explained_variance_ratio_), 'bo-')
plt.xlabel('Number of Components')
plt.ylabel('Cumulative Explained Variance Ratio')
plt.title('PCA Explained Variance')
plt.tight_layout()
plt.show()
# 4. QQ Plots for Select Features
plt.figure(figsize=(plot_scale*10, plot_scale*8))
for i, column in enumerate(X_df.columns[:3]): # Plot first 3 features
stats.probplot(X_df[column], dist="norm", plot=plt)
plt.title('Q-Q Plots for Selected Features')
plt.tight_layout()
plt.show()
def print_diagnostics_summary(diagnostics):
"""Print a summary of the diagnostic results."""
print("\nFeature Vector Diagnostics Summary:")
print("-" * 50)
# Condition number interpretation
print(f"\nMatrix Condition Number: {diagnostics['condition_number']:.2f}")
if diagnostics['condition_number'] > 1000:
print("WARNING: High condition number indicates potential numerical instability")
# Correlation analysis
print("\nHighly Correlated Feature Pairs:")
if diagnostics['high_correlation_pairs']:
for pair in diagnostics['high_correlation_pairs']:
print(f"- {pair['feature1']} & {pair['feature2']}: {pair['correlation']:.3f}")
else:
print("No highly correlated features found")
# Distribution analysis
print("\nDistribution Analysis:")
for feature, stats in diagnostics['distribution_stats'].items():
print(f"\nFeature {feature}:")
print(f"- Skewness: {stats['skewness']:.3f}")
print(f"- Kurtosis: {stats['kurtosis']:.3f}")
print(f"- Normality test p-value: {stats['normality_test']:.3e}")
# Get the processed features and diagnostics
X, y, diagnostics = verify_feature_vectors(imputed_food_rows)
# Print the diagnostic summary
print_diagnostics_summary(diagnostics)
[Figures: correlation heatmap (all features), correlation heatmap (nutrient features only), feature distribution KDE plot, PCA cumulative explained variance curve, and Q-Q plots]
Feature Vector Diagnostics Summary:
--------------------------------------------------
Matrix Condition Number: 18.84
Highly Correlated Feature Pairs:
- 12 & 14: 0.816
Distribution Analysis:
Feature 0:
- Skewness: 0.015
- Kurtosis: -0.400
- Normality test p-value: 2.111e-18
Feature 1:
- Skewness: -0.001
- Kurtosis: -0.321
- Normality test p-value: 6.604e-11
Feature 2:
- Skewness: 0.278
- Kurtosis: -0.595
- Normality test p-value: 3.640e-74
Feature 3:
- Skewness: 0.373
- Kurtosis: -1.197
- Normality test p-value: 0.000e+00
Feature 4:
- Skewness: 0.041
- Kurtosis: 0.663
- Normality test p-value: 3.815e-19
Feature 5:
- Skewness: -0.012
- Kurtosis: -1.421
- Normality test p-value: 0.000e+00
Feature 6:
- Skewness: 0.542
- Kurtosis: -1.257
- Normality test p-value: 0.000e+00
Feature 7:
- Skewness: -0.253
- Kurtosis: -0.802
- Normality test p-value: 5.713e-161
Feature 8:
- Skewness: -0.131
- Kurtosis: -0.163
- Normality test p-value: 1.231e-07
Feature 9:
- Skewness: 0.125
- Kurtosis: -0.871
- Normality test p-value: 6.905e-204
Feature 10:
- Skewness: -0.072
- Kurtosis: -0.848
- Normality test p-value: 7.038e-180
Feature 11:
- Skewness: 0.427
- Kurtosis: -0.978
- Normality test p-value: 0.000e+00
Feature 12:
- Skewness: 0.134
- Kurtosis: -1.277
- Normality test p-value: 0.000e+00
Feature 13:
- Skewness: 0.393
- Kurtosis: -1.104
- Normality test p-value: 0.000e+00
Feature 14:
- Skewness: 0.429
- Kurtosis: -1.264
- Normality test p-value: 0.000e+00
Feature 15:
- Skewness: -0.696
- Kurtosis: -0.796
- Normality test p-value: 4.225e-251
Feature 16:
- Skewness: 0.111
- Kurtosis: -0.565
- Normality test p-value: 2.368e-49
Feature 17:
- Skewness: 0.130
- Kurtosis: -0.829
- Normality test p-value: 1.371e-167
Feature 18:
- Skewness: -0.128
- Kurtosis: -0.544
- Normality test p-value: 1.169e-45
Feature 19:
- Skewness: -0.127
- Kurtosis: -0.428
- Normality test p-value: 3.456e-26
Feature 20:
- Skewness: -0.135
- Kurtosis: -0.632
- Normality test p-value: 3.985e-69
Feature 21:
- Skewness: -0.022
- Kurtosis: -0.686
- Normality test p-value: 2.748e-83
Feature 22:
- Skewness: -0.170
- Kurtosis: -1.130
- Normality test p-value: 0.000e+00
Feature 23:
- Skewness: -0.178
- Kurtosis: -0.642
- Normality test p-value: 5.200e-76
Feature 24:
- Skewness: 6.296
- Kurtosis: 37.637
- Normality test p-value: 0.000e+00
Feature 25:
- Skewness: 9.543
- Kurtosis: 89.072
- Normality test p-value: 0.000e+00
Feature 26:
- Skewness: -4.939
- Kurtosis: 22.389
- Normality test p-value: 0.000e+00
len(feature_names)
27
Top heatmap (all features):
Strong correlations in upper-left cluster (features 0-14)
Weaker/negative correlations in bottom section (features 15+)
Distinct blocks suggest feature groupings
Bottom heatmap (non-one-hot features):
Moderate positive correlations throughout (0.2-0.6)
No strong negative correlations
Less pronounced clustering than full feature set
PCA Explained Variance plot shows:
~90% variance captured by first 10 components
Sharp increase up to 5-6 components
Diminishing returns after 10 components
Q-Q plot indicates:
Non-normal distribution of features
Heavy tails at both ends
Plateaus at -0.5 and 0.5 suggesting discrete/binary features
Central region (~-1 to 1) follows normal distribution
The feature diagnostics indicate stable feature characteristics:
Moderate condition number (18.84) suggests acceptable numerical stability
Limited multicollinearity (only one highly correlated pair)
Features 0 and 1 show near-symmetric distributions (skewness ≈ 0) but are non-normal (very low p-values)
Negative kurtosis values indicate lighter tails than a normal distribution
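As an optional complement to the pairwise-correlation check (this sketch is not part of the original diagnostics), variance inflation factors can flag multicollinearity that pairwise correlations miss; it assumes statsmodels is installed and reuses X and feature_names from the cells above.
# Variance inflation factors for the scaled feature matrix; values above ~10
# are commonly read as a sign of problematic collinearity.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

X_arr = np.asarray(X)
vifs = [variance_inflation_factor(X_arr, i) for i in range(X_arr.shape[1])]
for name, vif in sorted(zip(feature_names, vifs), key=lambda t: -t[1])[:5]:
    print(f"{name}: VIF = {vif:.1f}")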
STRATIFIED SAMPLING#
Split and Prepare Data#
TEST_SIZE = 0.2
STRATIFICATION_BINS = 4
STRATIFIED_CV_FOLDS = 5
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
def create_stratification_bins(df, n_bins=STRATIFICATION_BINS):
iron_values = np.log1p(df['Iron, Fe'])  # bin on the log1p scale, consistent with the log-transformed target
# Use quantile-based binning to ensure roughly equal numbers in each bin
bins = pd.qcut(iron_values, q=n_bins, labels=False)
return bins
def create_food_group_bins(df):
food_group_values = df['food_group']
bins = pd.Categorical(food_group_values).codes
return bins
def create_cv_folds(X, y, strat_labels, n_splits=STRATIFIED_CV_FOLDS, random_state=RANDOM_SEED):
# Use StratifiedKFold instead of KFold
skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=random_state)
return skf
def split_and_prepare_data(df, test_size=TEST_SIZE, random_state=RANDOM_SEED):
# Create bins for stratification using iron content values
strat_labels = create_stratification_bins(df)
X, y = apply_feature_selection_and_scaling(df)
# Split the data
X_train, X_test, y_train, y_test, strat_train, strat_test = train_test_split(
X, y, strat_labels,
test_size=test_size,
random_state=random_state,
stratify=strat_labels
)
# Create stratified cross-validation folds using the training set's stratification labels
kfold = create_cv_folds(X_train, y_train, strat_train)
return X_train, X_test, y_train, y_test, kfold, strat_train
# split data
df2 = imputed_food_rows[imputed_food_rows['source_type'].isin(['1'])]  # restrict to directly measured rows (source_type '1') so models train on measured data only
X_train, X_test, y_train, y_test, kfold, strat_train = split_and_prepare_data(df2)
print(f"Training set shape: {X_train.shape}")
print(f"Test set shape: {X_test.shape}")
Training set shape: (4440, 27)
Test set shape: (1111, 27)
Verify Stratification for Cross Validation#
def verify_stratification_for_cross_validation():
# Get stratification labels for training set
strat_labels_train = strat_train
# Convert y_train to numpy array if it's a pandas Series
y_train_array = y_train.to_numpy() if isinstance(y_train, pd.Series) else y_train
# Create a DataFrame to store all the data points with their fold labels
data_points = []
data_labels = []
# Add full training set
data_points.extend(y_train_array)
data_labels.extend(['Full Set'] * len(y_train_array))
# Add each fold
for i, (train_idx, val_idx) in enumerate(kfold.split(X_train, strat_labels_train), 1):
y_val_fold = y_train_array[val_idx]
data_points.extend(y_val_fold)
data_labels.extend([f'Fold {i}'] * len(y_val_fold))
plot_df = pd.DataFrame({
'Iron Content': data_points,
'Dataset': data_labels
})
plt.figure(figsize=(20, 6))
sns.stripplot(data=plot_df,
x='Dataset',
y='Iron Content',
color='darkred',
alpha=0.2,
size=3,
jitter=0.3)
plt.title('Cross Validation Stratified Split Verification')
plt.xlabel('Dataset Segment')
plt.ylabel('Iron ( log1p(mg) per 100g Food Portion )')
# plt.yscale('log')
plt.tight_layout()
plt.show()
verify_stratification_for_cross_validation()
[Figure: strip plot of log1p iron content for the full training set and each of the 5 CV folds]
The cross-validation split visualization shows a consistent distribution of iron content across all 5 folds, with:
Similar density patterns in the 0-2 log1p(mg)/100 g range
Proportional representation of high-iron outliers (roughly 3-5 log1p(mg)/100 g)
Each fold capturing the full range of values
No visible systematic bias between folds
This confirms effective stratification in the CV splits, supporting reliable model evaluation.
Note that the y-axis is on the log1p(mg) per 100 g scale rather than raw milligrams; a small back-conversion sketch follows.
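A minimal back-conversion sketch, assuming y_train from the split above: expm1 undoes the log1p scaling so summaries can be reported in raw mg per 100 g.
# Back-transform the log1p target to mg per 100 g for interpretability.
import numpy as np

iron_mg = np.expm1(y_train)  # y_train is on the log1p(mg/100 g) scale
print(f"median iron: {np.median(iron_mg):.2f} mg/100 g, "
      f"95th percentile: {np.percentile(iron_mg, 95):.2f} mg/100 g")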
Verify Stratification for Training Test Split#
def verify_stratification_for_training_test_split():
# Small constant to handle zeros
eps = 1e-2
y_train2 = np.expm1(y_train)
y_test2 = np.expm1(y_test)
# Calculate log-spaced bins, adding epsilon to handle zeros
min_value = max(min(min(y_train2), min(y_test2)), eps)  # Ensure minimum is positive
max_value = max(max(y_train2), max(y_test2))
bins = np.logspace(np.log10(min_value), np.log10(max_value), 17)  # 17 edges for 16 bins
min_val = min(min(y_train2[y_train2 > 0]), min(y_test2[y_test2 > 0]))
max_val = max(max(y_train2[y_train2 > 0]), max(y_test2[y_test2 > 0]))
min_freq = 1 # Since we're using log scale, start at 1
max_freq =10000
# Training set distribution
plt.figure(figsize=(20, 5))
plt.hist(y_train2[y_train2 > 0], bins=bins, color='skyblue', edgecolor='black')
plt.title('Training Dataset Histogram: Iron Content Distribution')
plt.xlabel('Iron (mg per 100g of Food)')
plt.ylabel('Frequency')
plt.xscale('log')
plt.yscale('log')
plt.grid(True, which="both", ls="-", alpha=0.2)
plt.xlim(min_val, max_val)
plt.ylim(min_freq, max_freq)
plt.show()
# Test set distribution
plt.figure(figsize=(20, 5))
plt.hist(y_test2[y_test2 > 0], bins=bins, color='skyblue', edgecolor='black')
plt.title('Test Dataset Histogram: Iron Content Distribution')
plt.xlabel('Iron (mg per 100g of Food)')
plt.ylabel('Frequency')
plt.xscale('log')
plt.yscale('log')
plt.grid(True, which="both", ls="-", alpha=0.2)
plt.xlim(min_val, max_val)
plt.ylim(min_freq, max_freq)
plt.show()
verify_stratification_for_training_test_split()
[Figures: log-log histograms of iron content (mg per 100 g) for the training and test sets]
The histograms show very similar log-normal distributions of iron content between train and test sets, with:
Peak frequency around 1-2 mg/100g
Long right tail extending to 100 mg/100g
Consistent shape and proportions across datasets
Log scale reveals good representation across orders of magnitude
This distribution similarity supports valid model evaluation and indicates effective stratified splitting of the data.
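To quantify the visual similarity (an optional sketch, not part of the original analysis), a two-sample Kolmogorov-Smirnov test can compare the train and test iron distributions; y_train and y_test are the log1p-scaled targets from the split above.
# Two-sample KS test: a large p-value gives no evidence that the train and
# test iron distributions differ.
import numpy as np
from scipy.stats import ks_2samp

stat, p_value = ks_2samp(np.asarray(y_train), np.asarray(y_test))
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.3f}")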
Verify Food Groups for Training Test Split#
def compare_food_group_splits(df, train_indices, test_indices):
"""
Compare the distribution of food groups between training and test sets.
Parameters:
df: Original dataframe containing food group information
train_indices: Indices for training set
test_indices: Indices for test set
Returns:
DataFrame with food group counts and percentages for both sets
"""
# Get food groups for train and test sets
# Use label-based indexing: the indices come from the target Series' index labels
train_groups = df.loc[train_indices, 'food_group']
test_groups = df.loc[test_indices, 'food_group']
# Count food groups in each set
train_counts = train_groups.value_counts()
test_counts = test_groups.value_counts()
# Calculate percentages
train_percentages = (train_counts / len(train_groups) * 100).round(1)
test_percentages = (test_counts / len(test_groups) * 100).round(1)
# Create comparison DataFrame
comparison_df = pd.DataFrame({
'Train Count': train_counts,
'Train %': train_percentages,
'Test Count': test_counts,
'Test %': test_percentages
})
# Fill NaN values with 0 for groups that might be missing in either set
comparison_df = comparison_df.fillna(0)
# Sort by total count (train + test)
comparison_df['Total'] = comparison_df['Train Count'] + comparison_df['Test Count']
comparison_df = comparison_df.sort_values('Total', ascending=False)
comparison_df = comparison_df.drop('Total', axis=1)
# Add total row
total_row = pd.DataFrame({
'Train Count': [len(train_groups)],
'Train %': [100.0],
'Test Count': [len(test_groups)],
'Test %': [100.0]
}, index=['TOTAL'])
comparison_df = pd.concat([comparison_df, total_row])
return comparison_df
# Get indices from your train/test split
train_indices = y_train.index
test_indices = y_test.index
# Compare distributions
comparison = compare_food_group_splits(imputed_food_rows, train_indices, test_indices)
# Display results
print("\nFood Group Distribution in Train vs Test Sets:")
print(comparison.to_string())
Food Group Distribution in Train vs Test Sets:
Train Count Train % Test Count Test %
Vegetables and Vegetable Products 592 13.3 144 13.0
Beef Products 466 10.5 104 9.4
Lamb, Veal, and Game Products 322 7.3 95 8.6
Baked Products 311 7.0 67 6.0
Poultry Products 302 6.8 55 5.0
Fruits and Fruit Juices 263 5.9 62 5.6
Pork Products 238 5.4 70 6.3
Fast Foods 227 5.1 46 4.1
Finfish and Shellfish Products 167 3.8 53 4.8
Dairy and Egg Products 169 3.8 35 3.2
Legumes and Legume Products 157 3.5 40 3.6
Soups, Sauces, and Gravies 159 3.6 36 3.2
Beverages 138 3.1 44 4.0
Cereal Grains and Pasta 141 3.2 34 3.1
Baby Foods 108 2.4 27 2.4
Nut and Seed Products 99 2.2 26 2.3
Sausages and Luncheon Meats 93 2.1 25 2.3
Sweets 89 2.0 27 2.4
Restaurant Foods 89 2.0 20 1.8
Snacks 77 1.7 25 2.3
American Indian/Alaska Native Foods 66 1.5 20 1.8
Fats and Oils 48 1.1 18 1.6
Meals, Entrees, and Side Dishes 49 1.1 16 1.4
Spices and Herbs 42 0.9 13 1.2
Breakfast Cereals 28 0.6 9 0.8
TOTAL 4440 100.0 1111 100.0
The train/test split maintains balanced representation across food groups, with similar percentages in both sets. Largest categories (Vegetables, Beef, Lamb/Veal/Game) show proportional distribution between train (~13.3%, 10.5%, 7.3%) and test (~13%, 9.4%, 8.6%). Minor categories (Breakfast Cereals, Spices/Herbs) also maintain consistent ratios. This balanced split supports reliable model evaluation across food categories.
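The same balance can be checked formally with a chi-square test of independence on the train/test food-group counts; this optional sketch reuses the comparison table built above.
# Chi-square test of independence on the 2 x K table of food-group counts;
# a large p-value means no evidence that proportions differ between splits.
from scipy.stats import chi2_contingency

counts = comparison.drop(index='TOTAL')[['Train Count', 'Test Count']].astype(int)
chi2, p_value, dof, _ = chi2_contingency(counts.to_numpy().T)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p-value = {p_value:.3f}")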
MODEL FITTING#
To predict iron content in foods, I implemented a progressive modeling approach, starting with simple linear methods and advancing to more complex algorithms. Each model addresses specific challenges in the nutritional data, including non-linear relationships and skewed distributions:
Linear Regression: Established a baseline but struggled due to weak linear correlations between iron and other nutrients (e.g., strongest correlation: 0.49 with Magnesium).
Elastic Net: Combined L1 and L2 regularization to address multicollinearity and perform feature selection by shrinking unimportant coefficients to zero.
Random Forest: Captured non-linear relationships and handled the skewed iron distribution effectively. Its resistance to outliers (e.g., extreme iron in dried thyme) and its feature importance insights added value.
XGBoost: Leveraged gradient boosting to model complex nutrient interactions and iteratively improve predictions, excelling with non-linear and skewed data.
Neural Network: Used multiple hidden layers and dropout to learn intricate nutrient relationships while avoiding overfitting. This flexibility suited the complexity of the dataset despite its modest size.
# use_best_params=True
use_best_params = False  # run the full hyperparameter searches rather than reusing the previously found best parameters
n_iter = 100  # number of parameter settings sampled by each RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform, randint
from sklearn.experimental import enable_halving_search_cv
from sklearn.model_selection import HalvingRandomSearchCV
# Initialize with proper data types
model_results_df = pd.DataFrame(columns=['model_name', 'rmse_mean', 'rmse_std']).astype({
'model_name': 'str',
'rmse_mean': 'float64',
'rmse_std': 'float64'
})
def add_model_results(results_df, rmse_mean, rmse_std, model_name):
# Check if model already exists
if model_name in results_df['model_name'].values:
# Update existing row
results_df.loc[results_df['model_name'] == model_name, 'rmse_mean'] = rmse_mean
results_df.loc[results_df['model_name'] == model_name, 'rmse_std'] = rmse_std
return results_df
else:
# Add new row
new_row = pd.DataFrame({
'model_name': [model_name],
'rmse_mean': [rmse_mean],
'rmse_std': [rmse_std]
})
return pd.concat([results_df, new_row], ignore_index=True)
Linear Regression#
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
def train_linear_regression():
global model_results_df
model = LinearRegression()
# Using kfold object to split based on food groups
cv_scores = cross_val_score(
model,
X_train,
# np.log1p(y_train),
y_train, # already log1p() scaled
cv=kfold.split(X_train, strat_train), # Pass the .split() with groups
scoring='neg_root_mean_squared_error'
# scoring=custom_rmse_scorer
)
print(f"\nCross-validation RMSE scores: {-cv_scores}")
print(f"Mean CV RMSE score: {-cv_scores.mean():.3f} (std +/- {cv_scores.std():.3f})")
model_results_df = add_model_results(model_results_df, -cv_scores.mean(), cv_scores.std(), 'Linear Regression')
train_linear_regression()
Cross-validation RMSE scores: [0.37906097 0.36919303 0.34800875 0.35777179 0.37036437]
Mean CV RMSE score: 0.365 (std +/- 0.011)
Linear regression performs similarly to ElasticNet (RMSE 0.365 vs 0.3649), but significantly worse than tree-based models (below).
The consistent cross-validation scores (std=0.011) suggest stable but limited performance, indicating strong non-linear relationships in the data that linear models can’t capture effectively.
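One caveat when reading these numbers: the target was log1p-transformed before modeling, so the RMSE values reported throughout are in log space. A minimal sketch of how the linear baseline's error could also be expressed in the original (unscaled) units, using out-of-fold predictions and inverting the transform with expm1 (cross_val_predict is an illustrative choice here, not part of the original pipeline):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_predict

# Out-of-fold predictions on the training set (y_train is log1p(iron))
pred_log = cross_val_predict(LinearRegression(), X_train, y_train,
                             cv=kfold.split(X_train, strat_train))
rmse_log = np.sqrt(mean_squared_error(y_train, pred_log))
rmse_orig = np.sqrt(mean_squared_error(np.expm1(y_train), np.expm1(pred_log)))
print(f"Out-of-fold RMSE (log1p scale): {rmse_log:.3f}")
print(f"Out-of-fold RMSE (original iron units): {rmse_orig:.3f}")

The back-transformed figure is easier to compare with nutrition-label quantities, while the log-scale RMSE remains the metric used for model selection below.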
Support Vector Regression#
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
def svr_fit():
param_grid = {
'svr__C': [0.1, 1.0, 10.0, 100.0],
'svr__epsilon': [0.01, 0.1, 0.2],
'svr__kernel': ['rbf', 'linear'],
'svr__gamma': ['scale', 'auto', 0.1, 0.01],
# 'svr__tol': [1e-4],
'svr__max_iter': [100000]
}
if use_best_params:
param_grid = {
'svr__C': [10.0],
'svr__epsilon': [0.1],
'svr__kernel': ['rbf'],
'svr__gamma': ['scale'],
'svr__tol': [1e-4],
'svr__max_iter': [100000]
}
pipeline = Pipeline([
# ('scaler', MinMaxScaler()),
# ('scaler', StandardScaler()),
('svr', SVR(cache_size=1000, verbose=False))
])
param_search = RandomizedSearchCV(
estimator=pipeline,
param_distributions=param_grid,
n_iter=n_iter,
cv=kfold.split(X_train, strat_train),
# scoring=rmse_original_scale, # Use custom scorer because y_train is log1p() scaled
scoring='neg_root_mean_squared_error',
n_jobs=-1,
verbose=1,
random_state=RANDOM_SEED
)
param_search.fit(X_train, y_train)
# Calculate RMSE in original scale for all CV results
cv_results = pd.DataFrame({
'C': [params['svr__C'] for params in param_search.cv_results_['params']],
'epsilon': [params['svr__epsilon'] for params in param_search.cv_results_['params']],
'kernel': [params['svr__kernel'] for params in param_search.cv_results_['params']],
'gamma': [params['svr__gamma'] for params in param_search.cv_results_['params']],
'rmse': -param_search.cv_results_['mean_test_score'], # Already in original scale
'std': param_search.cv_results_['std_test_score']
})
cv_results = cv_results.sort_values('rmse').reset_index(drop=True)
print("\nBest parameters found:")
best_params = {k.replace('svr__', ''): v for k, v in param_search.best_params_.items()}
for param, value in best_params.items():
print(f"{param}: {value}")
print(f"\nBest CV RMSE: {-param_search.best_score_:.4f}")
print("\nAll parameter combinations sorted by RMSE:")
pd.set_option('display.float_format', lambda x: '%.4f' % x)
print(cv_results.head(10).to_string(index=False))
# Create model with best parameters
best_pipeline = Pipeline([
# ('scaler', MinMaxScaler()),
# ('scaler', StandardScaler()),
('svr', SVR(**best_params))
])
best_pipeline.fit(X_train, y_train)
global model_results_df
model_results_df = add_model_results(model_results_df, -param_search.best_score_, cv_results['std'][0], 'SVR')
return best_pipeline, param_search
svr_fit()
Fitting 5 folds for each of 96 candidates, totalling 480 fits
/root/fnana/fnana_venv/lib/python3.10/site-packages/sklearn/svm/_base.py:297: ConvergenceWarning: Solver terminated early (max_iter=100000). Consider pre-processing your data with StandardScaler or MinMaxScaler.
warnings.warn(
(the same ConvergenceWarning was emitted repeatedly across the 480 fits)
Best parameters found:
max_iter: 100000
kernel: rbf
gamma: auto
epsilon: 0.1
C: 10.0
Best CV RMSE: 0.3009
All parameter combinations sorted by RMSE:
C epsilon kernel gamma rmse std
10.0000 0.1000 rbf auto 0.3009 0.0101
10.0000 0.0100 rbf scale 0.3010 0.0125
10.0000 0.1000 rbf scale 0.3011 0.0137
10.0000 0.0100 rbf auto 0.3015 0.0114
10.0000 0.2000 rbf auto 0.3034 0.0108
10.0000 0.0100 rbf 0.1000 0.3035 0.0127
10.0000 0.1000 rbf 0.1000 0.3048 0.0142
100.0000 0.1000 rbf 0.0100 0.3058 0.0104
100.0000 0.2000 rbf 0.0100 0.3076 0.0088
100.0000 0.0100 rbf 0.0100 0.3077 0.0100
(Pipeline(steps=[('svr', SVR(C=10.0, gamma='auto', max_iter=100000))]),
RandomizedSearchCV(cv=<generator object _BaseKFold.split at 0x7ab45ca32180>,
estimator=Pipeline(steps=[('svr', SVR(cache_size=1000))]),
n_iter=100, n_jobs=-1,
param_distributions={'svr__C': [0.1, 1.0, 10.0, 100.0],
'svr__epsilon': [0.01, 0.1, 0.2],
'svr__gamma': ['scale', 'auto', 0.1,
0.01],
'svr__kernel': ['rbf', 'linear'],
'svr__max_iter': [100000]},
random_state=42, scoring='neg_root_mean_squared_error',
verbose=1))
The SVR shows convergence issues despite a high max_iter (100,000), indicating that the unscaled input features are a problem. Its best CV RMSE of 0.3009 falls between the tree-based models (better, below) and ElasticNet (worse).
Key points:
RBF kernel outperforms linear
Moderate C value (10.0) suggests balanced regularization
Consistent performance across parameters (std ~0.01)
Auto/scale gamma performed similarly
The convergence warnings and moderate performance suggest feature scaling is needed for better results.
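The scaler step was left commented out in the pipeline above, which is exactly what the warnings point at. Below is a minimal sketch of the scaled variant, re-using the best parameters found here; whether scaling actually improves the CV score is left unverified.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

scaled_svr = Pipeline([
    ('scaler', StandardScaler()),  # standardize features so the RBF kernel and solver behave
    ('svr', SVR(C=10.0, epsilon=0.1, kernel='rbf', gamma='auto',
                max_iter=100000, cache_size=1000))
])
scores = cross_val_score(scaled_svr, X_train, y_train,
                         cv=kfold.split(X_train, strat_train),
                         scoring='neg_root_mean_squared_error')
print(f"Scaled SVR CV RMSE: {-scores.mean():.4f} (std +/- {scores.std():.4f})")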
Elastic Net Regression#
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV, HalvingRandomSearchCV
from scipy.stats import uniform, randint
from sklearn.linear_model import ElasticNet
import numpy as np
from sklearn.metrics import make_scorer
def enet_fit():
param_grid = {
# balanced
# 'alpha': [0.00001, 0.0001, 0.001, 0.01, 0.1],
# 'l1_ratio': [0.1, 0.3, 0.5, 0.7, 0.9],
# 'max_iter': [3000],
# 'tol': [1e-4],
# 'warm_start': [True],
# 'selection': ['random']
# # quick
# 'alpha': [0.0001, 0.001, 0.01],
# 'l1_ratio': [0.1, 0.5, 0.9],
# 'max_iter': [2000],
# 'tol': [1e-4],
# 'warm_start': [True],
# 'selection': ['random']
# extensive
'alpha': [0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1, 1.0],
'l1_ratio': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
'max_iter': [5000],
'tol': [1e-5],
'warm_start': [True],
'selection': ['random', 'cyclic']
}
if use_best_params:
param_grid = {
'alpha': [0.0001], # regularization strength
'l1_ratio': [0.25], # mix between L1 and L2
'max_iter': [5000], # increased maximum iterations
'tol': [1e-5], # decreased tolerance
'warm_start': [True], # enable warm start
'selection': ['random'], # try different selection strategies
}
elastic_net = ElasticNet(
random_state=RANDOM_SEED,
fit_intercept=True,
)
# Using RandomizedSearchCV with custom scorer
param_search = RandomizedSearchCV(
estimator=elastic_net,
param_distributions=param_grid,
n_iter=n_iter, # Number of parameter settings sampled
cv=kfold.split(X_train, strat_train),
# scoring=rmse_scorer, # Using our custom scorer
scoring='neg_root_mean_squared_error',
n_jobs=-1,
verbose=1,
random_state=RANDOM_SEED
)
param_search.fit(X_train, y_train) ## y_train is log1p scaled
# Create results dataframe avoiding the SettingWithCopyWarning
cv_results = pd.DataFrame({
'alpha': [params['alpha'] for params in param_search.cv_results_['params']],
'l1_ratio': [params['l1_ratio'] for params in param_search.cv_results_['params']],
'selection': [params['selection'] for params in param_search.cv_results_['params']],
'rmse': -param_search.cv_results_['mean_test_score'],
'std': param_search.cv_results_['std_test_score']
})
# Sort by RMSE (best first)
cv_results = cv_results.sort_values('rmse').reset_index(drop=True)
print("\nBest parameters found:")
for param, value in param_search.best_params_.items():
print(f"{param}: {value}")
print(f"\nBest CV RMSE: {-param_search.best_score_:.4f}")
# Display all results in a clean table format
print("\nAll parameter combinations sorted by RMSE:")
pd.set_option('display.float_format', lambda x: '%.4f' % x)
print(cv_results.head(10).to_string(index=False))
global model_results_df
model_results_df = add_model_results(model_results_df, -param_search.best_score_, cv_results['std'][0], 'Elastic Net')
# Use entire trainset to create a final new model with best parameters
top_model = ElasticNet(**param_search.best_params_, random_state=RANDOM_SEED)
top_model.fit(X_train, y_train)  ## TODO: anything needed when X_train and y_train are both log1p() scaled?
# return LogTransformedModel(top_model) # makes unscaled predictions
return top_model # makes log1p() scaled predictions
best_en = enet_fit()
Fitting 5 folds for each of 100 candidates, totalling 500 fits
Best parameters found:
warm_start: True
tol: 1e-05
selection: cyclic
max_iter: 5000
l1_ratio: 0.9
alpha: 0.0001
Best CV RMSE: 0.3649
All parameter combinations sorted by RMSE:
alpha l1_ratio selection rmse std
0.0001 0.9000 cyclic 0.3649 0.0108
0.0001 0.8000 random 0.3649 0.0108
0.0001 0.7000 cyclic 0.3649 0.0108
0.0001 0.7000 random 0.3649 0.0108
0.0001 0.6000 cyclic 0.3649 0.0108
0.0001 0.6000 random 0.3649 0.0108
0.0001 0.5000 cyclic 0.3649 0.0108
0.0001 0.5000 random 0.3649 0.0108
0.0001 0.4000 cyclic 0.3649 0.0108
0.0001 0.4000 random 0.3649 0.0108
The ElasticNet model performs notably worse than both Random Forest and XGBoost (below) with a CV RMSE of 0.3649.
Key observations:
Parameters:
The best configuration used a high l1_ratio (0.9), i.e. LASSO-like behavior
The very small alpha (0.0001) suggests minimal regularization was needed
RMSE is identical across l1_ratios from 0.4 to 0.9, indicating low sensitivity to this parameter once alpha is this small
Performance:
~38% higher error than Random Forest (0.3649 vs 0.265)
Consistent performance across folds (std=0.0108)
Linear-model limitations are evident compared to the tree-based approaches
The inferior performance suggests non-linear relationships between features and iron content that ElasticNet cannot capture.
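Because the models here are trained on a log1p-transformed target, their raw predictions stay on the log scale; the commented-out LogTransformedModel wrapper above hints at the alternative of returning predictions in original units. Its definition is not shown in this section, so the following is only a minimal sketch of what such a wrapper might look like, assuming a simple expm1 inversion:
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin

class LogTransformedModel(BaseEstimator, RegressorMixin):
    """Hypothetical sketch: wrap a regressor fit on log1p(y) so that
    predict() returns values in the original, unlogged units."""
    def __init__(self, model):
        self.model = model  # an already-fitted regressor, e.g. the tuned ElasticNet
    def predict(self, X):
        # invert the log1p transform that was applied to the target before training
        return np.expm1(self.model.predict(X))

# usage sketch: LogTransformedModel(best_en).predict(X_test) would give unlogged estimates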
Random Forest#
from sklearn.ensemble import RandomForestRegressor
def rforest_fit():
param_grid = {
'n_estimators': [100, 200, 300], # Wider range with fewer points
'max_depth': [10, 20, None], # Include None for unlimited depth
'min_samples_split': [2, 5, 10], # More spread out values
'min_samples_leaf': [1, 2, 4], # Start from 1 leaf
'max_features': ['sqrt', 'log2'], # Most common choices
'bootstrap': [True, False], # Add bootstrapping option
'random_state': [42] # For reproducibility
}
if use_best_params:
param_grid = {
'n_estimators': [110,],
'max_depth': [25],
'min_samples_split': [4,],
'min_samples_leaf': [2,],
'max_features': [None]
}
rf = RandomForestRegressor()
# param_search = GridSearchCV(
# estimator=rf,
# param_grid=param_grid,
# cv=kfold.split(X_train, strat_train),
# scoring='neg_root_mean_squared_error',
# n_jobs=-1,
# verbose=1
# )
param_search = RandomizedSearchCV(
estimator=rf,
param_distributions=param_grid,
n_iter=n_iter, # Number of parameter settings sampled
cv=kfold.split(X_train, strat_train),
scoring='neg_root_mean_squared_error',
n_jobs=-1,
verbose=1,
random_state=RANDOM_SEED,
return_train_score=True
)
# Fit the model
param_search.fit(X_train, y_train)
# Print results
print("\nBest parameters found:")
for param, value in param_search.best_params_.items():
print(f"{param}: {value}")
print(f"\nBest CV RMSE: {-param_search.best_score_:.3f}")
# Create detailed CV results DataFrame
cv_results = create_cv_results_df(param_search)
print("\nTop 10 parameter combinations sorted by RMSE:")
pd.set_option('display.float_format', lambda x: '%.4f' % x)
print(cv_results.head(10).to_string(index=False))
# Train final model with best parameters
best_rf = RandomForestRegressor(**param_search.best_params_)
best_rf.fit(X_train, y_train)
# Visualizations
plot_feature_importance(best_rf, feature_names)
plot_learning_curves(param_search, cv_results)
global model_results_df
model_results_df = add_model_results(model_results_df, -param_search.best_score_, cv_results['std'][0], 'Random Forest')
return best_rf, cv_results
def create_cv_results_df(param_search):
"""Create a DataFrame with cross-validation results"""
cv_results = pd.DataFrame(param_search.cv_results_['params'])
cv_results['rmse'] = -param_search.cv_results_['mean_test_score']
cv_results['std'] = param_search.cv_results_['std_test_score']
cv_results['train_rmse'] = -param_search.cv_results_['mean_train_score']
cv_results['train_std'] = param_search.cv_results_['std_train_score']
return cv_results.sort_values('rmse').reset_index(drop=True)
def plot_feature_importance(model, feature_names, top_n=20):
"""Plot feature importance"""
feature_importance = pd.DataFrame({
'feature': feature_names,
'importance': model.feature_importances_
}).sort_values('importance', ascending=False)
plt.figure(figsize=(12, 5))
sns.barplot(data=feature_importance.head(top_n), x='importance', y='feature')
plt.title(f'Top {top_n} Most Important Features')
plt.xlabel('Feature Importance')
plt.tight_layout()
plt.show()
def plot_learning_curves(param_search, cv_results):
"""Plot learning curves comparing train and test performance"""
plt.figure(figsize=(10, 5))
# Plot distribution of train vs test RMSE
sns.kdeplot(data=cv_results, x='rmse', label='Test RMSE')
sns.kdeplot(data=cv_results, x='train_rmse', label='Train RMSE')
plt.axvline(x=-param_search.best_score_, color='r', linestyle='--',
label='Best Test RMSE')
plt.title('Distribution of Train vs Test RMSE')
plt.xlabel('RMSE')
plt.legend()
plt.tight_layout()
plt.show()
best_rf, cv_results = rforest_fit()
Fitting 5 folds for each of 100 candidates, totalling 500 fits
Best parameters found:
random_state: 42
n_estimators: 300
min_samples_split: 2
min_samples_leaf: 1
max_features: sqrt
max_depth: 20
bootstrap: False
Best CV RMSE: 0.265
Top 10 parameter combinations sorted by RMSE:
random_state n_estimators min_samples_split min_samples_leaf max_features max_depth bootstrap rmse std train_rmse train_std
42 300 2 1 sqrt 20.0000 False 0.2653 0.0048 0.0066 0.0004
42 300 2 1 sqrt NaN False 0.2663 0.0053 0.0000 0.0000
42 300 5 1 sqrt NaN False 0.2669 0.0049 0.0366 0.0005
42 300 2 1 log2 20.0000 False 0.2674 0.0048 0.0078 0.0005
42 100 5 1 sqrt 20.0000 False 0.2681 0.0062 0.0388 0.0010
42 300 5 1 log2 20.0000 False 0.2684 0.0051 0.0416 0.0003
42 200 2 2 sqrt 20.0000 False 0.2688 0.0065 0.0541 0.0007
42 100 5 1 log2 20.0000 False 0.2689 0.0051 0.0425 0.0003
42 300 5 1 log2 NaN False 0.2696 0.0057 0.0400 0.0005
42 100 2 2 sqrt NaN False 0.2699 0.0065 0.0538 0.0009


The Random Forest model performs slightly better than XGBoost (below) with a best CV RMSE of 0.265.
Key observations:
Model Configuration:
Deep trees (max_depth=20) allow the forest to capture complex feature interactions
No bootstrapping suggests training each tree on the full dataset is beneficial
sqrt max_features indicates a modest random feature subset per split works best
Stability Analysis:
Very consistent performance (std=0.0048 for best model)
Near-zero training error (train_rmse=0.0066 vs test_rmse=0.2653): the forest essentially memorizes the training folds, a gap that is typical of unpruned random forests
Despite that gap, test RMSE stays tight across folds, which is the more meaningful sign that the model generalizes
The distribution plot highlights strong model calibration and robustness:
Test RMSE (blue): Tight normal distribution centered at 0.28–0.29, showing consistent performance across data splits.
Train RMSE (orange): Broader distribution around 0.15–0.20 with minimal overlap with test RMSE, indicating:
A good balance of bias and variance.
No severe overfitting, despite the gap.
Stable learning across training sets.
The red dashed line, marking the best test RMSE, aligns with the peak density of test results, confirming the model reliably identified its optimal configuration.
The similar performance between RF and XGBoost (0.265 vs 0.276) suggests we’re approaching the inherent predictability limit of iron content from these features.
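As a quick check on the overfitting discussion above, the cv_results frame returned by rforest_fit() already holds train and test RMSE for every sampled configuration, so the gap can be summarized directly. A minimal sketch, reusing the objects defined above:
# Summarize the train/test RMSE gap across all sampled RF configurations.
# A large gap combined with a small test-RMSE spread supports the reading that
# the forest memorizes training folds yet still generalizes consistently.
gap_summary = cv_results.assign(gap=cv_results['rmse'] - cv_results['train_rmse'])
print(gap_summary[['rmse', 'train_rmse', 'gap']]
      .describe()
      .loc[['mean', 'std', 'min', 'max']])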
Gradient Boosted Trees#
from xgboost import XGBRegressor
from scipy.stats import randint, uniform # distributions sampled by RandomizedSearchCV
def xgboost_fit():
# Define parameter grid for XGBoost
param_grid = {
#####################
# comprehensive
####################
# Tree-specific parameters
'max_depth': randint(3, 12), # Controls tree depth
'min_child_weight': randint(1, 10), # Min sum of instance weight in child
'gamma': uniform(0, 1), # Min loss reduction for split
'subsample': uniform(0.6, 0.4), # Subsample ratio of training instances
'colsample_bytree': uniform(0.6, 0.4), # Subsample ratio of columns for each tree
'colsample_bylevel': uniform(0.6, 0.4), # Subsample ratio of columns for each level
# Boosting parameters
'n_estimators': randint(100, 1000), # Number of trees
'learning_rate': uniform(0.01, 0.29), # Learning rate
'max_delta_step': randint(0, 10), # Maximum delta step for each tree's weight estimation
# Regularization parameters
'reg_alpha': [0, 0.001, 0.01, 0.1, 1, 10], # L1 regularization
'reg_lambda': [0.01, 0.1, 1, 10, 100], # L2 regularization
# Tree-growing strategy
'grow_policy': ['depthwise', 'lossguide'], # Tree growing policy
'max_leaves': randint(0, 32), # Maximum number of leaves
# Performance parameters
'tree_method': ['hist'], # Tree construction algorithm
'sampling_method': ['uniform'],
#########################
# # focused search space
#########################
# 'max_depth': randint(4, 8),
# 'min_child_weight': randint(1, 5),
# 'gamma': uniform(0.1, 0.4),
# 'subsample': uniform(0.7, 0.3),
# 'colsample_bytree': uniform(0.7, 0.3),
# 'n_estimators': randint(200, 600),
# 'learning_rate': [0.01, 0.05, 0.1, 0.15, 0.2],
# 'reg_alpha': [0.001, 0.01, 0.1],
# 'reg_lambda': [0.1, 1, 10],
########################
# # grid search
########################
# 'max_depth': [3, 5, 7],
# 'min_child_weight': [1, 3, 5],
# 'gamma': [0, 0.1, 0.2],
# 'subsample': [0.6, 0.8, 1.0],
# 'colsample_bytree': [0.6, 0.8, 1.0],
# 'n_estimators': [100, 300, 500],
# 'learning_rate': [0.01, 0.05, 0.1],
# 'reg_alpha': [0.001, 0.01, 0.1],
# 'reg_lambda': [0.1, 1.0, 10.0],
}
if use_best_params:
param_grid = {
'max_depth': [10],
'learning_rate': [0.2],
'n_estimators': [500],
'min_child_weight': [2],
'gamma': [0.3],
'subsample': [0.5] ,
'colsample_bytree': [0.5]
}
# Create base XGBoost model
xgb = XGBRegressor(
random_state=RANDOM_SEED,
)
param_search = RandomizedSearchCV(
estimator=xgb,
param_distributions=param_grid,
n_iter=n_iter, # Number of parameter settings sampled
cv=kfold.split(X_train, strat_train),
# scoring=rmse_scorer, # Use custom scorer
scoring='neg_root_mean_squared_error',
n_jobs=-1,
verbose=1,
random_state=RANDOM_SEED
)
param_search.fit(X_train, y_train)
# Print best parameters and score
print("\nBest parameters found:")
for param, value in param_search.best_params_.items():
print(f"{param}: {value}")
print(f"\nBest CV RMSE: {-param_search.best_score_:.3f}")
# Create model with best parameters
best_xgb = XGBRegressor(
**param_search.best_params_,
random_state=RANDOM_SEED,
# tree_method='hist',
enable_categorical=True
)
# Store CV results with RMSE in original scale
cv_results = pd.DataFrame({
'n_estimators': [params['n_estimators'] for params in param_search.cv_results_['params']],
'max_depth': [params['max_depth'] for params in param_search.cv_results_['params']],
'learning_rate': [params['learning_rate'] for params in param_search.cv_results_['params']],
'subsample': [params['subsample'] for params in param_search.cv_results_['params']],
'colsample_bytree': [params['colsample_bytree'] for params in param_search.cv_results_['params']],
'min_child_weight': [params['min_child_weight'] for params in param_search.cv_results_['params']],
'gamma': [params['gamma'] for params in param_search.cv_results_['params']],
'rmse': -param_search.cv_results_['mean_test_score'], # RMSE on the log1p-transformed target
'std': param_search.cv_results_['std_test_score']
})
cv_results = cv_results.sort_values('rmse').reset_index(drop=True)
# Display all results in a clean table format
print("\nAll parameter combinations sorted by RMSE:")
pd.set_option('display.float_format', lambda x: '%.4f' % x)
print(cv_results.head(10).to_string(index=False))
# Fit final model with transformed target
best_xgb.fit(X_train, y_train)
# Plot feature importances
feature_importance = pd.DataFrame({
'feature': feature_names,
'importance': best_xgb.feature_importances_
})
feature_importance = feature_importance.sort_values('importance', ascending=False)
plt.figure(figsize=(12, 6))
sns.barplot(data=feature_importance.head(30), x='importance', y='feature')
plt.title('Top Most Important Features')
plt.xlabel('Feature Importance')
plt.tight_layout()
plt.show()
global model_results_df
model_results_df = add_model_results(model_results_df, -param_search.best_score_, cv_results['std'][0], 'XGBoost')
return best_xgb
best_xgb = xgboost_fit()
Fitting 5 folds for each of 100 candidates, totalling 500 fits
Best parameters found:
colsample_bylevel: 0.8255860368075345
colsample_bytree: 0.9362839943511725
gamma: 0.08920432871205619
grow_policy: lossguide
learning_rate: 0.04473101822758285
max_delta_step: 0
max_depth: 6
max_leaves: 24
min_child_weight: 4
n_estimators: 493
reg_alpha: 0.01
reg_lambda: 0.01
sampling_method: uniform
subsample: 0.799376879571623
tree_method: hist
Best CV RMSE: 0.276
All parameter combinations sorted by RMSE:
n_estimators max_depth learning_rate subsample colsample_bytree min_child_weight gamma rmse std
493 6 0.0447 0.7994 0.9363 4 0.0892 0.2755 0.0078
600 11 0.0249 0.7406 0.8159 2 0.1620 0.2801 0.0086
878 4 0.1192 0.7524 0.8206 1 0.1648 0.2869 0.0071
230 9 0.2377 0.8721 0.9895 9 0.2328 0.2892 0.0065
914 5 0.2485 0.6331 0.7084 5 0.1332 0.2919 0.0125
504 9 0.2810 0.6499 0.6276 5 0.0773 0.2922 0.0093
260 5 0.1691 0.7976 0.9895 9 0.2839 0.2928 0.0080
404 8 0.0204 0.9290 0.8315 6 0.4385 0.2937 0.0088
599 7 0.1701 0.6560 0.9323 3 0.3452 0.2939 0.0082
324 11 0.0120 0.7350 0.6002 5 0.3526 0.2944 0.0080

XGBoost Model Performance Overview
Best CV RMSE: 0.276, with low standard deviation (0.0065–0.0125) across folds, indicating consistent predictions.
Model Parameters:
Moderate tree depth (6) and max leaves (24) balance complexity.
Small learning rate (0.0447) ensures gradual optimization, with a high n_estimators (493) compensating for the slower learning.
Feature Importance:
Top predictors include nutrients (e.g., Cu, Zn, Niacin), showing strong physiological links to iron.
Features from the Dairy/Egg cluster, derived from OpenAI embeddings, suggest our food sub-groups add predictive value.
Embedding vectors (embed_0–embed_7) provide additional, albeit lower, contributions.
The model effectively combines nutrient correlations and food category semantics, achieving robust performance.
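To make that grouped picture explicit, the per-feature importances can be summed by feature family. The prefixes used below ('embed_', 'cluster_') are assumptions about how the columns in feature_names are labeled, so this is a rough sketch rather than the exact breakdown:
import pandas as pd

imp = pd.Series(best_xgb.feature_importances_, index=feature_names)

def feature_family(name):
    # Hypothetical grouping by column-name prefix; adjust to the real column names.
    if name.startswith('embed_'):
        return 'embedding dimensions'
    if name.startswith('cluster_'):
        return 'food sub-group clusters'
    return 'nutrient measurements'

print(imp.groupby(feature_family).sum().sort_values(ascending=False))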
Neural Network#
To begin, we used RandomizedSearchCV to identify patterns in parameter combinations that performed well. This method quickly sampled a wide range of parameter values, offering insights into promising regions of the parameter space.
Next, we planned to apply GridSearchCV to systematically explore these promising neighborhoods in more detail, fine-tuning the parameters for optimal performance. This two-step approach balances breadth and precision in hyperparameter optimization.
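A minimal sketch of that second step, assuming the random search points to architectures around (128, 64, 32, 16) with ELU activations; the grid values below are illustrative only, and the sketch reuses the KerasRegressorWrapper defined in the next cell:
from sklearn.model_selection import GridSearchCV

# Narrow grid bracketing the neighborhood suggested by RandomizedSearchCV
refine_grid = {
    'hidden_layers': [(128, 64, 32, 16), (128, 128, 64, 32, 16)],
    'activation': ['elu'],
    'learning_rate': [0.005, 0.01, 0.02],
    'dropout_rate': [0.005, 0.01],
    'batch_size': [64],
    'epochs': [200],
}
refine_search = GridSearchCV(
    estimator=KerasRegressorWrapper(),
    param_grid=refine_grid,
    cv=kfold.split(X_train, strat_train),
    scoring='neg_root_mean_squared_error',
    n_jobs=4,
    verbose=1,
)
# refine_search.fit(X_train, y_train)  # not run here; shown only to illustrate the plan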
from sklearn.model_selection import KFold, RandomizedSearchCV, GridSearchCV
from sklearn.base import BaseEstimator, RegressorMixin
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
np.random.seed(RANDOM_SEED)
tf.random.set_seed(RANDOM_SEED)
# Disable GPU
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
tf.config.set_visible_devices([], 'GPU')
# Optional: Configure CPU threads
# tf.config.threading.set_inter_op_parallelism_threads(8) # Number of parallel operations
# tf.config.threading.set_intra_op_parallelism_threads(8) # Number of threads for operations
class KerasRegressorWrapper(BaseEstimator, RegressorMixin):
def __init__(self, hidden_layers=(64, 32), activation='relu',
learning_rate=0.001, dropout_rate=0.2,
batch_size=32, epochs=100):
self.hidden_layers = hidden_layers
self.activation = activation
self.learning_rate = learning_rate
self.dropout_rate = dropout_rate
self.batch_size = batch_size
self.epochs = epochs
def create_model(self):
inputs = keras.Input(shape=(self.n_features_,))
# First hidden layer with batch normalization
x = keras.layers.Dense(self.hidden_layers[0], activation=self.activation)(inputs)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.Dropout(self.dropout_rate)(x)
# Hidden layers with batch normalization
for units in self.hidden_layers[1:]:
x = keras.layers.Dense(units, activation=self.activation)(x)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.Dropout(self.dropout_rate)(x)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs=inputs, outputs=outputs)
optimizer = keras.optimizers.AdamW(
learning_rate=self.learning_rate,
weight_decay=0.01,
beta_1=0.9,
beta_2=0.999,
amsgrad=True
)
model.compile(optimizer=optimizer, loss='mse', metrics=['mae'])
return model
def plot_training_history(self):
"""Plot training history including loss and metrics."""
import matplotlib.pyplot as plt
# Create figure with subplots
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
# Plot training & validation loss
ax1.plot(self.history_.history['loss'], label='Training Loss')
ax1.plot(self.history_.history['val_loss'], label='Validation Loss')
ax1.set_title('Model Loss (MSE)')
ax1.set_xlabel('Epoch')
ax1.set_ylabel('Loss')
ax1.legend(loc='upper right')
ax1.grid(True)
# Plot training & validation MAE
ax2.plot(self.history_.history['mae'], label='Training MAE')
ax2.plot(self.history_.history['val_mae'], label='Validation MAE')
ax2.set_title('Model MAE')
ax2.set_xlabel('Epoch')
ax2.set_ylabel('MAE')
ax2.legend(loc='upper right')
ax2.grid(True)
plt.tight_layout()
return fig
def fit(self, X, y, validation_split=0.2, plot_history=True):
self.n_features_ = X.shape[1]
self.model_ = self.create_model()
# Early stopping callback
early_stopping = keras.callbacks.EarlyStopping(
monitor='val_loss',
patience=10,
restore_best_weights=True,
mode='min'
)
# Learning rate reduction callback
reduce_lr = keras.callbacks.ReduceLROnPlateau(
monitor='val_loss',
factor=0.2,
patience=5,
min_lr=1e-6,
mode='min'
)
checkpoint = keras.callbacks.ModelCheckpoint(
'best_model_checkpoint.keras', # New format
monitor='val_loss',
save_best_only=True,
mode='min',
verbose=0
)
y = np.array(y)
# Training history
self.history_ = self.model_.fit(
X, y,
epochs=self.epochs,
batch_size=self.batch_size,
validation_split=validation_split,
callbacks=[early_stopping, reduce_lr, checkpoint],
# verbose=2 # per-epoch progress output
verbose=0 # keep individual CV fits silent
)
# Plot training history if requested
if plot_history:
self.plot_training_history()
return self
def predict(self, X):
return self.model_.predict(X, verbose=0).flatten()
def nn_fit():
param_grid = {
'hidden_layers': [
# (256, 128, 64), # Wider network
# (128, 128, 128), # Uniform width
# (512, 256, 128, 64), # Very wide with gradual reduction
# (128, 64, 32, 16), # Pyramid structure
# (64, 64, 64, 64), # Thin uniform
# (256, 64, 32), # Sharp reduction
(128, 64, 32, 16), # Gradual reduction
(128, 64, 32, 16, 8 ), # Gradual reduction
(128, 128, 64, 32, 16), # Gradual reduction
(128, 128, 64, 64, 32, 16), # Gradual reduction
(128, 128, 64, 64, 32, 32, 16, 16), # Gradual reduction
],
'activation': ['selu', 'elu', 'relu'], # Testing different modern activations
# 'activation': ['selu', ], # Testing different modern activations
# 'learning_rate': [0.001, 0.003, 0.01], # Log-scale sampling
'learning_rate': [0.01], # Log-scale sampling
'dropout_rate': [0.001, 0.005, 0.01], # Cover common ranges
# 'dropout_rate': [0.05, 0.1, 0.2], # Cover common ranges
# 'dropout_rate': [0.2], # Cover common ranges
# 'batch_size': [32, 64, 128], # Powers of 2
'batch_size': [32, 64, 128 ], # Powers of 2
'epochs': [200] # Increased since we added early stopping
# 'epochs': [300] # Increased since we added early stopping
}
nn_model = KerasRegressorWrapper()
param_search = RandomizedSearchCV(
estimator=nn_model,
param_distributions=param_grid,
n_iter=n_iter,
cv=kfold.split(X_train, strat_train),
scoring='neg_root_mean_squared_error',
# n_jobs=-1, # CPU
n_jobs=16, # 16 parallel workers on the GPU machine
verbose=1
)
param_search.fit(X_train, y_train)
print("\nBest parameters found:")
for param, value in param_search.best_params_.items():
print(f"{param}: {value}")
print(f"\nBest CV RMSE: {-param_search.best_score_:.3f}")
# Train final model with best parameters and plot history
best_model = KerasRegressorWrapper(**param_search.best_params_)
best_model.fit(X_train, y_train, plot_history=True)
cv_results = pd.DataFrame({
'hidden_layers': [params['hidden_layers'] for params in param_search.cv_results_['params']],
'activation': [params['activation'] for params in param_search.cv_results_['params']],
'learning_rate': [params['learning_rate'] for params in param_search.cv_results_['params']],
'dropout_rate': [params['dropout_rate'] for params in param_search.cv_results_['params']],
'batch_size': [params['batch_size'] for params in param_search.cv_results_['params']],
'epochs': [params['epochs'] for params in param_search.cv_results_['params']],
'rmse': -param_search.cv_results_['mean_test_score'],
'std': param_search.cv_results_['std_test_score']
})
cv_results = cv_results.sort_values('rmse').reset_index(drop=True)
print("\nAll parameter combinations sorted by RMSE:")
pd.set_option('display.float_format', lambda x: '%.4f' % x)
print(cv_results.head(10).to_string(index=False))
global model_results_df
model_results_df = add_model_results(
model_results_df,
-param_search.best_score_,
cv_results['std'][0],
'Neural Network'
)
return best_model
best_nn = nn_fit()
Fitting 5 folds for each of 100 candidates, totalling 500 fits
(TensorFlow console output omitted: repeated cuFFT/cuDNN/cuBLAS factory-registration warnings at startup, followed by the 16 parallel search workers each claiming memory on the single RTX 4090 and logging repeated CUDA_ERROR_OUT_OF_MEMORY allocation retries.)
2024-12-09 05:01:34.319823: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.07GiB (7587468800 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.319874: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 8.09GiB (8690616320 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.319910: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 548.38MiB (575023872 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.320249: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.36GiB (6828721664 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.320294: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.28GiB (7821554688 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.320334: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 493.55MiB (517521664 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.320547: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.72GiB (6145849344 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.320592: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.56GiB (7039398912 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.320630: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 444.19MiB (465769472 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.320840: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.15GiB (5531264512 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.320886: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.90GiB (6335458816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.320922: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 399.77MiB (419192576 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.321421: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.64GiB (4978138112 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.321459: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.31GiB (5701912576 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.321654: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.17GiB (4480324096 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.321725: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.78GiB (5131721216 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.321935: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.75GiB (4032291584 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.321973: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.30GiB (4618548736 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.322239: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.01GiB (2154466304 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.322296: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.38GiB (3629062400 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.322322: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.57GiB (2761509632 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.323556: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.87GiB (4156693760 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.323584: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.28GiB (7817840640 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.323615: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 609.32MiB (638915328 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.323772: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.81GiB (1939019776 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.323830: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.48GiB (3741024256 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.323852: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.55GiB (7036056576 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.323877: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 548.38MiB (575023872 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324009: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.62GiB (1745117696 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324105: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.13GiB (3366921728 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324121: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.90GiB (6332450816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324172: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 493.55MiB (517521664 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324182: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.46GiB (1570605824 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324352: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.31GiB (5699205632 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324364: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.32GiB (1413545216 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324425: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.82GiB (3030229504 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324442: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 444.19MiB (465769472 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324605: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.78GiB (5129285120 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324618: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.18GiB (1272190720 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324655: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 399.77MiB (419192576 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324703: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.54GiB (2727206400 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324799: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.30GiB (4616356352 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324811: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.07GiB (1144971776 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.324888: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 359.80MiB (377273344 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.325014: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.29GiB (2454485760 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.325054: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.87GiB (4154720512 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.325068: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 982.74MiB (1030474752 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.325186: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 323.82MiB (339546112 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.325384: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.06GiB (2209037056 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.325403: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.48GiB (3739248384 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.325416: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 884.46MiB (927427328 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.325473: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 291.43MiB (305591552 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.326832: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.85GiB (1988133376 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.326849: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.13GiB (3365323520 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.326861: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 796.02MiB (834684672 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.327524: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 262.29MiB (275032576 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330094: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 236.06MiB (247529472 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330101: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.04GiB (3266156032 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330160: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.67GiB (1789319936 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330222: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 212.46MiB (222776576 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330333: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.74GiB (2939540224 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330397: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 191.21MiB (200498944 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330392: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.50GiB (1610387968 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330516: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 172.09MiB (180449280 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330576: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.46GiB (2645586176 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330615: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.35GiB (1449349120 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330681: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 154.88MiB (162404352 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330785: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.22GiB (2381027584 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330846: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 139.39MiB (146163968 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330840: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.21GiB (1304414208 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.330964: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 125.45MiB (131547648 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331036: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.00GiB (2142924800 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331082: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.09GiB (1173972736 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331154: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 112.91MiB (118393088 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331255: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.80GiB (1928632320 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331317: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1007.63MiB (1056575488 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331321: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 101.62MiB (106553856 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331444: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 91.46MiB (95898624 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331500: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.62GiB (1735769088 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331532: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 906.87MiB (950917888 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331598: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 82.31MiB (86308864 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331698: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.45GiB (1562192128 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331759: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 74.08MiB (77678080 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331755: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 816.18MiB (855826176 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331874: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 66.67MiB (69910272 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331929: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.31GiB (1405972992 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.331968: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 734.56MiB (770243584 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.332065: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.31GiB (2485358592 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.332131: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.82GiB (3028791040 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.332174: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.18GiB (1265375744 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.332251: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 716.42MiB (751216384 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.332307: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 661.11MiB (693219328 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.333163: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 60.00MiB (62919424 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.333301: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.54GiB (2725911808 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.333323: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 54.00MiB (56627712 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.333377: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.06GiB (1138838272 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.333444: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.28GiB (2453320448 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.333504: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 48.60MiB (50964992 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.333605: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 977.47MiB (1024954624 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.333648: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.06GiB (2207988224 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.333807: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 43.74MiB (45868544 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334128: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 879.73MiB (922459136 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334142: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.85GiB (1987189504 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334162: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 39.37MiB (41281792 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334189: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.08GiB (2236822784 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334374: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 791.75MiB (830213376 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334388: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.67GiB (1788470528 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334413: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 35.43MiB (37153792 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334429: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.87GiB (2013140480 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334603: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 712.58MiB (747192064 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.334615: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.50GiB (1609623552 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.335187: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 31.89MiB (33438464 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.335219: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.69GiB (1811826432 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.335415: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 641.32MiB (672472832 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.335428: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.35GiB (1448661248 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.337021: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 28.70MiB (30094848 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.337159: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 644.77MiB (676094720 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.338202: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.52GiB (1630643712 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.338766: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.37GiB (1467579392 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340053: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 25.83MiB (27085568 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340067: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.23GiB (1320821504 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340160: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 580.30MiB (608485376 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340176: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 23.25MiB (24377088 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340200: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.11GiB (1188739328 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340281: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 522.27MiB (547636992 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340302: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 20.92MiB (21939456 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340326: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1020.30MiB (1069865472 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340405: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 470.04MiB (492873472 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340420: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 18.83MiB (19745536 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340446: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 918.27MiB (962878976 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340526: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 423.04MiB (443586304 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340540: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 16.95MiB (17771008 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340565: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 826.45MiB (866591232 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340648: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 380.73MiB (399227648 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340661: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 15.25MiB (15994112 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340687: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 743.80MiB (779932160 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340767: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 342.66MiB (359304960 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340781: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 13.73MiB (14394880 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340806: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 669.42MiB (701938944 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340884: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 308.39MiB (323374592 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340899: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 12.35MiB (12955392 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.340924: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 602.48MiB (631745024 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341013: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 277.55MiB (291037184 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341028: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 11.12MiB (11660032 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341053: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 542.23MiB (568570624 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341131: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 249.80MiB (261933568 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341146: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 10.01MiB (10494208 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341171: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 488.01MiB (511713536 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341249: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 224.82MiB (235740416 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341263: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 9.01MiB (9444864 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341290: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 439.21MiB (460542208 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341369: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 202.34MiB (212166400 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341383: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 8.11MiB (8500480 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341408: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 395.29MiB (414488064 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341501: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 182.10MiB (190949888 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341554: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 355.76MiB (373039360 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341629: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 163.89MiB (171855104 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341649: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 320.18MiB (335735552 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341731: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 595.00MiB (623897600 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341758: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 147.50MiB (154669824 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.341778: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 288.16MiB (302162176 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.343016: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 132.75MiB (139203072 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
I0000 00:00:1733720494.343979 2637478 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 11218 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:41:00.0, compute capability: 8.9
2024-12-09 05:01:34.344019: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 577.19MiB (605225728 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.344048: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.21GiB (1303795200 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.344098: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 535.50MiB (561507840 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.344236: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 519.47MiB (544703232 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.344242: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.09GiB (1173415680 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.344570: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 481.95MiB (505357056 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.344750: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1007.15MiB (1056074240 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.344839: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 467.52MiB (490233088 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.344946: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 11.55GiB (12398493696 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.345001: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 433.75MiB (454821376 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.345064: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 906.44MiB (950466816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.347081: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 259.35MiB (271945984 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.347340: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 390.38MiB (409339392 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.347381: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 233.41MiB (244751616 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.348952: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 119.48MiB (125282816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.348970: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 210.07MiB (220276480 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.349888: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 420.77MiB (441209856 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.349984: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 815.79MiB (855420160 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.349978: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 10.39GiB (11158643712 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.351859: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 378.69MiB (397089024 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.351866: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 734.21MiB (769878272 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.351923: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 9.35GiB (10042778624 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352000: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 660.79MiB (692890624 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352096: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 340.82MiB (357380352 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352161: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 8.42GiB (9038500864 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352167: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 594.71MiB (623601664 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352447: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 306.74MiB (321642496 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352455: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 535.24MiB (561241600 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352512: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.58GiB (8134650368 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352574: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 481.72MiB (505117440 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352664: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 276.07MiB (289478400 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352728: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.82GiB (7321185280 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352739: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 433.55MiB (454605824 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352874: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 248.46MiB (260530688 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352884: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 390.19MiB (409145344 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.352937: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.14GiB (6589066752 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353007: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 351.17MiB (368230912 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353116: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 223.62MiB (234477824 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353181: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.52GiB (5930160128 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353197: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 351.34MiB (368405504 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353212: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 316.05MiB (331407872 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353395: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 201.25MiB (211030272 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353438: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.97GiB (5337143808 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353449: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 316.21MiB (331565056 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353475: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 284.45MiB (298267136 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353744: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 181.13MiB (189927424 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353786: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.47GiB (4803429376 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353798: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 284.58MiB (298408704 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353815: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 256.00MiB (268440576 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.353972: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 163.02MiB (170934784 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.354022: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.03GiB (4323086336 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.354032: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 256.13MiB (268568064 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.354080: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 230.40MiB (241596672 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.356542: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 207.36MiB (217437184 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.357885: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 186.63MiB (195693568 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.359379: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 167.96MiB (176124416 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.359430: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 146.71MiB (153841408 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.361020: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 107.53MiB (112754688 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.361084: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.62GiB (3890777600 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.361136: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 96.78MiB (101479424 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.361145: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 189.06MiB (198248960 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.361173: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 230.51MiB (241711360 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.365606: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 151.17MiB (158512128 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.365652: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 132.04MiB (138457344 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.368093: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 170.16MiB (178424064 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.368140: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.26GiB (3501699840 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.368144: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 207.46MiB (217540352 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
[Several hundred near-identical XLA allocator lines followed, retrying requests ranging from about 1 MiB up to roughly 11 GiB, each failing with RESOURCE_EXHAUSTED: CUDA_ERROR_OUT_OF_MEMORY; the repeated lines are condensed to this placeholder.]
I0000 00:00:1733720494.380688 2637490 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 11152 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:41:00.0, compute capability: 8.9
2024-12-09 05:01:34.455174: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 252.73MiB (265003520 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.455544: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 63.86MiB (66964992 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.455563: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 103.20MiB (108214528 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.455996: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 227.45MiB (238503168 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.456054: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 57.48MiB (60268544 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.456108: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 92.88MiB (97393152 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.456188: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 204.71MiB (214652928 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.456469: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 51.73MiB (54241792 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.456488: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 83.59MiB (87653888 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.456979: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 184.24MiB (193187840 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.457006: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 46.56MiB (48817664 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.457026: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 75.23MiB (78888704 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.457417: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 165.81MiB (173869056 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.457429: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 41.90MiB (43936000 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.457449: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 67.71MiB (71000064 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.457889: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 37.71MiB (39542528 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.457927: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 149.23MiB (156482304 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.457940: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 60.94MiB (63900160 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.461973: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 134.31MiB (140834304 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.461998: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 33.94MiB (35588352 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.462021: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 54.85MiB (57510144 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.462136: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 120.88MiB (126750976 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.462149: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 30.55MiB (32029696 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.462167: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 49.36MiB (51759360 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.462712: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 108.79MiB (114075904 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.462725: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 27.49MiB (28826880 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.462743: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 44.42MiB (46583552 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.463206: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 97.91MiB (102668544 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.463219: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 24.74MiB (25944320 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.463237: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 39.98MiB (41925376 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.464039: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 88.12MiB (92401920 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.464050: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 22.27MiB (23350016 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.464280: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 35.98MiB (37732864 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.464438: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 79.31MiB (83161856 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.464893: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 20.04MiB (21015040 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.464911: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 32.39MiB (33959680 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.468033: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 71.38MiB (74845696 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
I0000 00:00:1733720494.469804 2637487 gpu_device.cc:2022] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 17 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:41:00.0, compute capability: 8.9
2024-12-09 05:01:34.470512: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 18.04MiB (18913536 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.471057: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 29.15MiB (30563840 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.471762: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 16.23MiB (17022208 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.472287: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 26.23MiB (27507456 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.473157: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 14.61MiB (15320064 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.473677: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 64.24MiB (67361280 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.476802: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 13.15MiB (13788160 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.477801: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 11.83MiB (12409344 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.478790: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 10.65MiB (11168512 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.479599: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 23.61MiB (24756736 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.479612: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 9.59MiB (10051840 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.483773: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 8.63MiB (9046784 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.485009: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.76MiB (8142336 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.485680: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 57.82MiB (60625152 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.487017: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.99MiB (7328256 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.487878: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 21.25MiB (22281216 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.488351: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 19.12MiB (20053248 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.488488: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 52.04MiB (54562816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.488758: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 17.21MiB (18048000 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.489253: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 46.83MiB (49106688 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.489272: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 15.49MiB (16243200 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.489608: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 13.94MiB (14618880 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.489983: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 42.15MiB (44196096 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.490430: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 12.55MiB (13157120 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.490537: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 37.93MiB (39776512 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.490577: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 11.29MiB (11841536 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.491180: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 10.16MiB (10657536 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.491220: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 34.14MiB (35799040 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.491607: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 9.15MiB (9591808 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.491715: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 30.73MiB (32219136 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.492372: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 8.23MiB (8632832 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.494741: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 27.65MiB (28997376 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.495444: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 24.89MiB (26097664 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.496832: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 22.40MiB (23488000 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.498272: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.29MiB (6595584 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.498309: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 20.16MiB (21139200 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.498331: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.41MiB (7769600 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.499881: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.66MiB (5936128 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.499975: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 18.14MiB (19025408 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.499993: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.09MiB (5342720 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500099: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.67MiB (6992640 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500133: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 16.33MiB (17123072 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500148: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.58MiB (4808448 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500238: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.00MiB (6293504 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500273: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 14.70MiB (15410944 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500289: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.13MiB (4327680 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500398: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.40MiB (5664256 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500508: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 13.23MiB (13870080 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500642: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 17.12MiB (17956864 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500668: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.86MiB (5097984 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500721: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 11.90MiB (12483072 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500848: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.38MiB (4588288 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500918: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 15.41MiB (16161280 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.500936: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 10.71MiB (11234816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501257: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.94MiB (4129536 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501293: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 9.64MiB (10111488 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501409: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 13.87MiB (14545152 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501506: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.54MiB (3716608 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501591: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 8.68MiB (9100544 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501663: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 12.48MiB (13090816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501702: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.19MiB (3345152 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501735: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.81MiB (8190720 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501905: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.87MiB (3010816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501899: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 11.24MiB (11781888 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.501940: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.03MiB (7371776 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502066: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.58MiB (2709760 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502129: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 10.11MiB (10603776 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502150: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.33MiB (6634752 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502265: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.33MiB (2438912 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502326: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 9.10MiB (9543424 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502348: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.69MiB (5971456 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502464: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.09MiB (2195200 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502523: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 8.19MiB (8589312 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502546: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.12MiB (5374464 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502659: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.88MiB (1975808 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502727: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.37MiB (7730432 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502748: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.61MiB (4837120 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502865: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.70MiB (1778432 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502924: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.63MiB (6957568 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.502947: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.15MiB (4353536 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503071: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.53MiB (1600768 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503132: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.97MiB (6262016 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503154: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.74MiB (3918336 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503269: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.37MiB (1440768 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503327: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.37MiB (5635840 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503351: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.36MiB (3526656 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503462: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.24MiB (1296896 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503519: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.84MiB (5072384 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503541: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.03MiB (3174144 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503652: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.11MiB (1167360 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503705: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.35MiB (4565248 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503729: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.72MiB (2856960 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503837: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.00MiB (1050624 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503899: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.92MiB (4108800 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.503919: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.45MiB (2571264 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.504033: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 923.5KiB (945664 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.504564: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.53MiB (3697920 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.505069: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.21MiB (2314240 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.507075: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.17MiB (3328256 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.507678: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.86MiB (2995456 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.507739: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.99MiB (2082816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.508153: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.57MiB (2695936 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.508564: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.79MiB (1874688 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.508727: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.31MiB (2426368 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.508776: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.61MiB (1687296 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.509603: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.08MiB (2183936 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.509618: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.45MiB (1518592 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.509813: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.30MiB (1366784 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.510201: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 831.2KiB (851200 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.510193: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.87MiB (1965568 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.512335: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 748.2KiB (766208 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.512418: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 673.5KiB (689664 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.512826: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 606.2KiB (620800 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.512901: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.69MiB (1769216 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.513379: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 545.8KiB (558848 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.513443: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.52MiB (1592320 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.513503: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 491.2KiB (503040 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.514218: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 442.2KiB (452864 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.514218: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.37MiB (1433088 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.514825: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 398.2KiB (407808 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.514895: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.23MiB (1289984 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.515724: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.17MiB (1230336 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.516619: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 358.5KiB (367104 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.517225: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.11MiB (1161216 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.517306: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.06MiB (1107456 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.517473: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1020.8KiB (1045248 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.517491: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 973.5KiB (996864 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.518681: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 918.8KiB (940800 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.518698: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 876.2KiB (897280 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.529727: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 788.8KiB (807680 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.539291: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 827.0KiB (846848 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.540732: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 322.8KiB (330496 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.541795: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 290.5KiB (297472 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.542508: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 261.5KiB (267776 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.542792: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 235.5KiB (241152 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:34.543243: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 212.0KiB (217088 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.201548: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 11.55GiB (12398298112 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.201668: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 10.39GiB (11158467584 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.201761: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 9.35GiB (10042620928 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.201851: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 8.42GiB (9038358528 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.201941: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.58GiB (8134522368 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.202605: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.82GiB (7321070080 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.202970: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.14GiB (6588962816 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.203477: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.52GiB (5930066432 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.204442: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.97GiB (5337059840 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.204785: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.47GiB (4803353600 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.205294: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.03GiB (4323018240 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.205378: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.62GiB (3890716416 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.205457: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.26GiB (3501644800 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.205536: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.93GiB (3151480320 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.205615: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.64GiB (2836332288 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.205692: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.38GiB (2552698880 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.205770: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.14GiB (2297428992 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.205845: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.92GiB (2067686144 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.205920: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.73GiB (1860917504 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206002: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.56GiB (1674825728 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206079: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.40GiB (1507343104 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206156: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.26GiB (1356608768 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206230: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.14GiB (1220947968 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206306: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 1.02GiB (1098853120 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206383: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 943.15MiB (988967936 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206458: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 848.84MiB (890071296 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206533: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 763.95MiB (801064192 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206607: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 687.56MiB (720957952 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206681: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 618.80MiB (648862208 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206753: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 556.92MiB (583976192 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206827: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 501.23MiB (525578752 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206905: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 451.11MiB (473020928 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.206979: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 406.00MiB (425719040 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.207059: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 365.40MiB (383147264 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.207134: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 328.86MiB (344832768 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.207208: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 295.97MiB (310349568 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.207285: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 266.38MiB (279314688 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.207359: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 239.74MiB (251383296 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.207433: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 215.76MiB (226245120 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.207507: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 194.19MiB (203620608 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.207598: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 174.77MiB (183258624 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.209092: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 157.29MiB (164932864 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.209167: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 141.56MiB (148439808 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.209242: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 127.41MiB (133595904 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.209314: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 114.67MiB (120236544 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.209386: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 103.20MiB (108212992 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.209460: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 92.88MiB (97391872 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.209694: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 83.59MiB (87652864 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.210328: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 75.23MiB (78887680 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.211099: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 67.71MiB (70999040 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.211174: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 60.94MiB (63899136 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.211248: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 54.84MiB (57509376 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.211322: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 49.36MiB (51758592 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.211394: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 44.42MiB (46582784 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.211466: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 39.98MiB (41924608 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.211538: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 35.98MiB (37732352 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:35.211611: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 32.39MiB (33959168 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:36.243215: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720496.260257 2640944 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720496.265449 2640944 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:01:36.281853: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:01:36.794235: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 16.40MiB (17194496 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
[identical CUDA_ERROR_OUT_OF_MEMORY allocation retries with progressively smaller requests (from 12.33GiB down to 1.31MiB), and the registration/initialization messages above, repeat for each parallel worker process and are omitted here]
I0000 00:00:1733720496.937435 2639421 service.cc:148] XLA service 0x7712280015c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
I0000 00:00:1733720496.937472 2639421 service.cc:156] StreamExecutor device (0): NVIDIA GeForce RTX 4090, Compute Capability 8.9
E0000 00:00:1733720497.248573 2639421 cuda_dnn.cc:534] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
E0000 00:00:1733720497.248629 2639421 cuda_dnn.cc:538] Memory usage: 9830400 bytes free, 25282281472 bytes total.
2024-12-09 05:01:37.448540: W tensorflow/core/framework/op_kernel.cc:1841] OP_REQUIRES failed at xla_ops.cc:577 : FAILED_PRECONDITION: DNN library initialization failed. Look at the errors above for more details.
2024-12-09 05:01:37.448606: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: FAILED_PRECONDITION: DNN library initialization failed. Look at the errors above for more details.
[[{{node StatefulPartitionedCall}}]]
2024-12-09 05:01:37.908055: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:01:37.908294: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:01:37.908326: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
I0000 00:00:1733720498.251646 2639810 cuda_dnn.cc:529] Loaded cuDNN version 90500
2024-12-09 05:01:40.954029: I external/local_xla/xla/stream_executor/cuda/cuda_asm_compiler.cc:397] ptxas warning : Registers are spilled to local memory in function 'loop_add_maximum_reduce_subtract_fusion', 4 bytes spill stores, 4 bytes spill loads
I0000 00:00:1733720500.987540 2639082 device_compiler.h:188] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
2024-12-09 05:01:45.251402: W external/local_xla/xla/tsl/framework/bfc_allocator.cc:306] Allocator (GPU_0_bfc) ran out of memory trying to allocate 16.00MiB with freed_by_count=0. The caller indicates that this is not a failure, but this may mean that there could be performance gains if more memory were available.
2024-12-09 05:01:45.256362: W tensorflow/core/framework/op_kernel.cc:1841] OP_REQUIRES failed at xla_ops.cc:577 : RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16778112 bytes.
2024-12-09 05:01:45.256407: I tensorflow/core/framework/local_rendezvous.cc:405] Local rendezvous is aborting with status: RESOURCE_EXHAUSTED: Out of memory while trying to allocate 16778112 bytes.
[[{{node StatefulPartitionedCall}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
2024-12-09 05:01:45.269693: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:01:45.269732: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:01:45.269740: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:01:45.269888: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:01:45.269908: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:01:45.269913: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:01:45.492246: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720505.510826 2650122 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720505.516402 2650122 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:01:45.529438: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 12.03GiB (12916594176 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.529682: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 10.83GiB (11624934400 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.529911: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 9.74GiB (10462440448 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.530140: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 8.77GiB (9416196096 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.530342: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.89GiB (8474576384 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.530515: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 7.10GiB (7627118592 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.530682: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 6.39GiB (6864406528 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.530875: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.75GiB (6177965568 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.531062: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 5.18GiB (5560168960 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.531259: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.66GiB (5004151808 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.531428: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 4.19GiB (4503736320 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.531620: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.77GiB (4053362688 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.531840: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.40GiB (3648026368 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.532099: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 3.06GiB (3283223552 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.532284: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.75GiB (2954900992 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.532483: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.48GiB (2659410944 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.532705: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.23GiB (2393469696 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.532897: I external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:1193] failed to allocate 2.01GiB (2154122752 bytes) from device: RESOURCE_EXHAUSTED: : CUDA_ERROR_OUT_OF_MEMORY: out of memory
2024-12-09 05:01:45.535978: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:01:46.489618: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:01:46.489667: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:01:46.489675: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:01:46.489884: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:01:46.489918: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:01:46.489923: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:01:47.530066: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:01:47.530110: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:01:47.530116: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:01:47.530338: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:01:47.530368: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:01:47.530374: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:01:47.534138: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720507.560496 2666525 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720507.568398 2666525 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:01:47.592583: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:01:49.652197: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:01:49.652248: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:01:49.652257: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:01:49.652488: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:01:49.652530: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:01:49.652537: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:01:59.627578: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720519.648842 2803072 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720519.654667 2803072 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:01:59.673603: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:02:01.315788: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720521.339717 2821167 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720521.347500 2821167 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:02:01.368237: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:02:01.749559: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:02:01.749602: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:02:01.749609: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:02:01.749825: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:02:01.749856: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:02:01.749863: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:02:02.528334: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720522.547567 2833445 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720522.552782 2833445 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:02:02.572484: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:02:03.417418: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:02:03.417472: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:02:03.417481: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:02:03.417709: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:02:03.417750: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:02:03.417759: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:02:04.620810: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:02:04.620866: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:02:04.620876: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:02:04.621111: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:02:04.621158: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:02:04.621166: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:02:55.453125: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720575.473586 3479336 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720575.479535 3479336 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:02:55.497952: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:02:57.446269: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:02:57.446307: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:02:57.446313: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:02:57.446536: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:02:57.446566: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:02:57.446572: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:01.411553: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720581.434073 3553478 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720581.441235 3553478 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:01.462895: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:03.481235: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:03.481265: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:03.481271: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:03.481450: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:03.481470: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:03.481475: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:04.563237: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720584.587684 3593183 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720584.593081 3593183 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:04.612260: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:06.423256: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720586.443763 3611066 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720586.449239 3611066 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:06.467487: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:06.667392: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:06.667435: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:06.667443: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:06.667639: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:06.667667: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:06.667672: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:08.295348: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720588.314866 3627488 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720588.321175 3627488 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:08.340475: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:08.421429: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:08.421490: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:08.421499: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:08.421714: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:08.421757: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:08.421765: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:10.024640: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720590.044046 3640389 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720590.049574 3640389 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:10.068122: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:10.240873: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:10.240909: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:10.240915: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:10.241130: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:10.241156: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:10.241161: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:11.744124: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720591.762767 3652516 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720591.768306 3652516 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:11.786282: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:11.943437: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:11.943483: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:11.943491: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:11.943691: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:11.943722: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:11.943730: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:13.464560: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720593.484880 3665501 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720593.490869 3665501 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:13.509779: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:13.601542: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:13.601570: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:13.601575: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:13.601705: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:13.601725: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:13.601730: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:15.251453: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720595.270264 3677935 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720595.276355 3677935 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:15.294689: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:15.367255: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:15.367310: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:15.367319: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:15.367534: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:15.367573: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:15.367581: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:17.087661: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:17.087687: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:17.087694: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:17.087807: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:17.087826: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:17.087830: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:17.167607: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720597.185246 3692328 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720597.190603 3692328 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:17.207907: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:18.958121: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:18.958148: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:18.958153: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:18.958275: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:18.958293: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:18.958298: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:19.034055: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720599.052951 3704820 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720599.058959 3704820 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:19.081540: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:20.875977: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720600.897642 3716241 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720600.903710 3716241 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:20.923601: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:20.991637: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:20.991695: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:20.991705: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:20.991921: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:20.991955: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:20.991961: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:22.565448: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720602.584731 3731050 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720602.590614 3731050 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:22.608678: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:22.866828: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:22.866861: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:22.866868: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:22.867065: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:22.867099: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:22.867107: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:24.366045: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720604.385571 3745704 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720604.391214 3745704 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:24.409883: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:24.511226: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:24.511258: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:24.511264: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:24.511455: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:24.511483: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:24.511488: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:26.277387: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720606.297471 3760564 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720606.303301 3760564 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:26.323164: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:26.362578: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:26.362621: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:26.362627: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:26.362843: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:26.362882: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:26.362891: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:28.254150: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:28.254202: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:28.254210: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:28.254421: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:28.254457: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:28.254463: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:03:36.797709: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720616.821380 3887321 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720616.827657 3887321 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:03:36.847514: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:03:38.748752: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:03:38.748793: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:03:38.748801: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:03:38.749035: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:03:38.749067: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:03:38.749074: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:04:16.333489: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720656.353302 127712 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720656.358740 127712 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:04:16.376568: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:04:18.362814: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:04:18.362860: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:04:18.362866: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:04:18.363091: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:04:18.363127: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:04:18.363135: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:04:28.460870: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720668.484598 260544 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720668.491349 260544 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:04:28.512724: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:04:30.589810: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720670.611381 286275 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720670.617338 286275 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:04:30.632298: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:04:30.632345: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:04:32.673854: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:04:36.123338: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
E0000 00:00:1733720676.148670 364957 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720676.154748 364957 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:04:36.173075: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
[... the same cuInit / cuFFT / cuDNN / cuBLAS / cpu_feature_guard diagnostics repeat for each worker process (distinct PIDs) that imports TensorFlow; repetitions truncated for readability ...]
These messages are informational rather than fatal: no CUDA-capable GPU is present on the host, so TensorFlow falls back to CPU execution, and the "factory ... has already been registered" lines are harmless duplicate plugin registrations.
E0000 00:00:1733720868.457684 2366662 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:07:48.477868: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:07:50.468468: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:07:50.468511: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:07:50.468518: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:07:50.468738: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:07:50.468764: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:07:50.468769: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:07:50.971673: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720870.993558 2382977 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720871.000032 2382977 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:07:51.019265: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:07:53.071784: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:07:53.071828: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:07:53.071836: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:07:53.072063: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:07:53.072104: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:07:53.072114: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:07:53.254473: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720873.278270 2403185 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720873.285356 2403185 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:07:53.306648: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:07:55.302975: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720875.323817 2420088 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720875.329863 2420088 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:07:55.349797: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:07:55.377244: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:07:55.377284: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:07:55.377290: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:07:55.377481: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:07:55.377508: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:07:55.377513: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:07:57.330009: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:07:57.330048: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:07:57.330055: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:07:57.330251: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:07:57.330278: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:07:57.330283: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:07:58.297870: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720878.319245 2444103 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720878.325389 2444103 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:07:58.345038: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:07:59.983322: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720880.005064 2457409 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720880.011221 2457409 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:08:00.029929: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:08:00.372922: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:08:00.372977: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:08:00.372986: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:08:00.373220: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:08:00.373260: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:08:00.373267: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:08:01.893450: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720881.914431 2471739 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720881.920174 2471739 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:08:01.939637: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:08:02.022625: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:08:02.022668: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:08:02.022674: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:08:02.022875: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:08:02.022912: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:08:02.022918: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:08:03.735092: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720883.757223 2482952 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720883.762802 2482952 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:08:03.781788: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:08:03.805104: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:08:03.805154: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:08:03.805164: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:08:03.805379: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:08:03.805410: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:08:03.805415: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:08:05.770511: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:08:05.770546: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:08:05.770552: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:08:05.770743: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:08:05.770767: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:08:05.770771: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:08:05.872913: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720885.892763 2497810 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720885.898768 2497810 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:08:05.917606: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:08:07.729180: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720887.754002 2511555 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720887.760929 2511555 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:08:07.780862: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:08:07.802190: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:08:07.802237: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:08:07.802246: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:08:07.802471: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:08:07.802511: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:08:07.802520: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:08:09.521503: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720889.541322 2528486 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720889.547729 2528486 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:08:09.567662: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:08:09.847314: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:08:09.847361: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:08:09.847371: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:08:09.847576: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:08:09.847605: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:08:09.847610: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:08:11.530059: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:08:11.530094: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:08:11.530100: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:08:11.530287: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:08:11.530311: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:08:11.530321: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
2024-12-09 05:08:11.706744: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1733720891.732997 2552560 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1733720891.740829 2552560 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-09 05:08:11.764395: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-09 05:08:13.753176: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2024-12-09 05:08:13.753215: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 9bb81e261c39
2024-12-09 05:08:13.753223: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 9bb81e261c39
2024-12-09 05:08:13.753441: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 560.35.3
2024-12-09 05:08:13.753468: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 560.35.3
2024-12-09 05:08:13.753473: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:259] kernel version seems to match DSO: 560.35.3
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1733720929.515708 2945627 service.cc:148] XLA service 0x7aaf90007030 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
I0000 00:00:1733720929.515755 2945627 service.cc:156] StreamExecutor device (0): Host, Default Version
2024-12-09 05:08:49.584468: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
I0000 00:00:1733720931.120887 2945627 device_compiler.h:188] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.
Best parameters found:
learning_rate: 0.01
hidden_layers: (128, 128, 64, 32, 16)
epochs: 200
dropout_rate: 0.005
batch_size: 32
activation: relu
Best CV RMSE: 0.299
All parameter combinations sorted by RMSE:
hidden_layers activation learning_rate dropout_rate batch_size epochs rmse std
(128, 128, 64, 32, 16) relu 0.0100 0.0050 32 200 0.2993 0.0143
(128, 64, 32, 16) relu 0.0100 0.0050 64 200 0.3001 0.0116
(128, 64, 32, 16) selu 0.0100 0.0100 128 200 0.3005 0.0082
(128, 128, 64, 64, 32, 32, 16, 16) elu 0.0100 0.0050 64 200 0.3008 0.0075
(128, 128, 64, 32, 16) elu 0.0100 0.0010 64 200 0.3016 0.0128
(128, 128, 64, 64, 32, 32, 16, 16) relu 0.0100 0.0100 32 200 0.3025 0.0144
(128, 128, 64, 64, 32, 16) relu 0.0100 0.0010 32 200 0.3029 0.0123
(128, 64, 32, 16) elu 0.0100 0.0010 64 200 0.3030 0.0144
(128, 64, 32, 16) selu 0.0100 0.0010 32 200 0.3031 0.0116
(128, 64, 32, 16, 8) elu 0.0100 0.0010 32 200 0.3031 0.0171


The training curves show clear signs of overfitting after roughly epochs 10-20:
Training loss (MSE) continues to decrease steadily while validation loss plateaus around 0.09.
MAE follows the same pattern: training MAE keeps improving while validation MAE stagnates around 0.20.
The growing gap between training and validation metrics indicates the model is memorizing the training data rather than learning generalizable patterns.
For this reason we implemented early stopping; increasing the dropout rate may also help, as sketched below.
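For reference, a minimal sketch of this setup, assuming a Keras-style MLP like the one tuned above; build_model is a hypothetical helper (the notebook's own training code lives in earlier cells), and the dropout rate is deliberately set higher than the tuned 0.005 to illustrate the suggestion.
import tensorflow as tf

def build_model(n_features, hidden_layers=(128, 128, 64, 32, 16),
                dropout_rate=0.05, learning_rate=0.01):
    # Dense/Dropout stack mirroring the best grid-search configuration,
    # but with a larger dropout rate to push back against overfitting.
    model = tf.keras.Sequential([tf.keras.Input(shape=(n_features,))])
    for units in hidden_layers:
        model.add(tf.keras.layers.Dense(units, activation='relu'))
        model.add(tf.keras.layers.Dropout(dropout_rate))
    model.add(tf.keras.layers.Dense(1))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss='mse', metrics=['mae'])
    return model

# Stop training once validation loss stops improving and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=10, restore_best_weights=True)

# nn = build_model(X_train.shape[1])
# history = nn.fit(X_train, y_train, validation_split=0.2, epochs=200,
#                  batch_size=32, callbacks=[early_stop], verbose=0)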
BEST MODEL EVALUATION#
Evaluation Results#
model_results_df.sort_values('rmse_mean')
 | model_name | rmse_mean | rmse_std |
---|---|---|---|
3 | Random Forest | 0.2653 | 0.0048 |
4 | XGBoost | 0.2755 | 0.0078 |
5 | Neural Network | 0.2993 | 0.0143 |
1 | SVR | 0.3009 | 0.0101 |
2 | Elastic Net | 0.3649 | 0.0108 |
0 | Linear Regression | 0.3649 | 0.0108 |
Model Performance Summary:
Top Performer:
Random Forest: Lowest RMSE (0.2653) and standard deviation (0.0048), showing superior accuracy and consistency.
Runner-Up:
XGBoost: Close second with an RMSE of 0.2755.
Middle Tier:
Neural Network: RMSE of 0.2993, outperformed by the tree-based models.
Weaker Models:
Linear Regression and Elastic Net: Highest RMSE (0.3649), indicating a poor fit.
Conclusion:
Tree-based models, particularly Random Forest, excel at predicting iron content, outperforming both linear methods and deep learning. These results may vary slightly if the notebook is re-evaluated.
top_model = best_rf # TODO store models in model_results_df and fetch best from there automatically
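One way to resolve the TODO, as a sketch: keep the fitted estimators alongside the scores so the best model can be selected programmatically. fitted_models is a hypothetical dict mapping the names in model_results_df to the corresponding fitted estimators; the notebook currently stores only names and CV scores.
def pick_top_model(results_df, fitted_models):
    # Pick the row with the lowest cross-validated RMSE and return its estimator.
    best_row = results_df.sort_values('rmse_mean').iloc[0]
    print(f"Selected {best_row['model_name']} (CV RMSE {best_row['rmse_mean']:.4f})")
    return fitted_models[best_row['model_name']]

# top_model = pick_top_model(model_results_df, fitted_models)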
Performance Metrics#
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error
def smape(y_true, y_pred, epsilon=1e-10):
    # Symmetric MAPE; epsilon guards against division by zero.
    return np.mean(2 * np.abs(y_pred - y_true) /
                   (np.abs(y_true) + np.abs(y_pred) + epsilon)) * 100

def me(y_true, y_pred):
    # Mean (signed) error: positive values mean the model underestimates.
    return np.mean(y_true - y_pred)

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def adjusted_mpe(y_true, y_pred):
    # Percentage errors normalized by the mean of y_true rather than each value.
    mean = np.mean(y_true)
    return np.mean((y_true - y_pred) / mean) * 100

def adjusted_mape(y_true, y_pred):
    mean = np.mean(y_true)
    return np.mean(np.abs(y_true - y_pred) / mean) * 100

def adjusted_mpe2(y_true, y_pred):
    median = np.median(y_true)
    return np.median((y_true - y_pred) / median) * 100

def adjusted_mape2(y_true, y_pred):
    median = np.median(y_true)
    return np.median(np.abs(y_true - y_pred) / median) * 100

def adjusted_mpe3(y_true, y_pred):
    median = np.median(y_true)
    return np.mean((y_true - y_pred) / median) * 100

def adjusted_mape3(y_true, y_pred):
    median = np.median(y_true)
    return np.mean(np.abs(y_true - y_pred) / median) * 100

def mpe(y_true, y_pred, epsilon=1e-10):
    return np.mean((y_true - y_pred) / (y_true + epsilon)) * 100

def evaluate_model(y_true, y_pred, is_log_scaled=False):
    # Undo the log1p target transform so metrics are reported in mg.
    if is_log_scaled:
        y_true_unscaled = np.expm1(y_true)
        y_pred_unscaled = np.expm1(y_pred)
    else:
        y_true_unscaled = y_true
        y_pred_unscaled = y_pred
    metrics = {
        'RMSE': np.sqrt(mean_squared_error(y_true_unscaled, y_pred_unscaled)),
        'R²': r2_score(y_true_unscaled, y_pred_unscaled),
        'ME': me(y_true_unscaled, y_pred_unscaled),
        'MAE': mae(y_true_unscaled, y_pred_unscaled),
        'aMPE': adjusted_mpe(y_true_unscaled, y_pred_unscaled),
        'aMAPE': adjusted_mape(y_true_unscaled, y_pred_unscaled),
    }
    return metrics
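A quick sanity check of evaluate_model on made-up values, illustrating why is_log_scaled=True matters: predictions made on the log1p scale are mapped back to mg with expm1 before the metrics are computed, so the reported RMSE and MAE are in mg rather than log units.
# Toy example with made-up values; not part of the actual evaluation.
y_true_toy = np.log1p(np.array([0.5, 2.0, 8.0, 20.0]))
y_pred_toy = np.log1p(np.array([0.6, 1.8, 9.0, 15.0]))
print(evaluate_model(y_true_toy, y_pred_toy, is_log_scaled=True))   # metrics in mg
print(evaluate_model(y_true_toy, y_pred_toy, is_log_scaled=False))  # metrics on the log1p scale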
Testset Performance#
def testset_performance():
    # Make predictions
    y_pred = top_model.predict(X_test)
    # Evaluate model
    top_model_metrics = evaluate_model(y_test, y_pred, is_log_scaled=True)
    print("\nTop Model Test Set Performance:")
    for metric, value in top_model_metrics.items():
        print(f"{metric}: {value:.3f}")
testset_performance()
Top Model Test Set Performance:
RMSE: 3.551
R²: 0.487
ME: 0.376
MAE: 0.826
aMPE: 16.349
aMAPE: 35.954
The model demonstrates moderate predictive power:
R²: 0.487, explaining about 49% of the variance in iron content.
RMSE: 3.551 mg, indicating typical prediction errors of roughly ±3.5 mg per 100 g.
Mean Error (ME): 0.376 mg; since ME is defined as mean(y_true − y_pred), the positive value indicates a slight tendency to underestimate.
Mean Absolute Error (MAE): 0.826 mg, reflecting reasonable average accuracy.
Adjusted MAPE (aMAPE): 35.954%, indicating sizable relative errors, particularly for low-iron foods (a per-iron-level error breakdown is sketched below).
The model performs well for typical foods but struggles with extreme values and fortified products, as the gap analysis later in this notebook highlights.
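To back up the point about relative errors at low iron levels, a hedged sketch that bins the test set by true iron content and reports MAE and aMAPE per bin; it reuses top_model, X_test and y_test from earlier cells, and the bin edges are arbitrary choices.
def error_by_iron_level(model, X, y_log):
    # Convert targets and predictions back to mg, then summarize errors per iron bin.
    y_true = np.expm1(np.asarray(y_log))
    y_pred = np.expm1(model.predict(X))
    bins = pd.cut(y_true, bins=[0, 0.5, 2, 5, 15, np.inf],
                  labels=['<0.5', '0.5-2', '2-5', '5-15', '>15'])
    rows = []
    for label in bins.categories:
        m = (bins == label)
        if m.sum() == 0:
            continue
        rows.append({'iron_mg': label, 'n': int(m.sum()),
                     'MAE': mae(y_true[m], y_pred[m]),
                     'aMAPE': adjusted_mape(y_true[m], y_pred[m])})
    return pd.DataFrame(rows).round(3)

# error_by_iron_level(top_model, X_test, y_test)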
Food Group Performance#
def food_group_performance(model, X, y, food_groups):
    # Make predictions
    y_pred = model.predict(X)
    # Initialize lists to store results
    results = []
    # Calculate metrics for each food group
    for group in np.unique(food_groups):
        mask = food_groups == group
        # Skip groups with too few samples
        if sum(mask) < 2:
            continue
        # Calculate metrics using evaluate_model function
        group_metrics = evaluate_model(
            y_true=y[mask],
            y_pred=y_pred[mask],
            is_log_scaled=True
        )
        # Save results
        results.append({
            'Food Group': group,
            'Sample Size': sum(mask),
            'R²': group_metrics['R²'],
            'RMSE': group_metrics['RMSE'],
            'mean': np.mean(y[mask]),
            'ME': group_metrics['ME'],
            'MAE': group_metrics['MAE'],
            'aMPE': group_metrics['aMPE'],
            'aMAPE': group_metrics['aMAPE'],
        })
    results_df = pd.DataFrame(results)
    results_df = results_df.sort_values('RMSE')
    numeric_cols = ['R²', 'RMSE', 'ME', 'MAE', 'aMPE', 'aMAPE', 'mean']
    results_df[numeric_cols] = results_df[numeric_cols].round(3)
    return results_df
indices = y_test.index
test_food_groups = imputed_food_rows.iloc[indices]['food_group']
print(f"\nTotal number of unique food groups: {len(np.unique(test_food_groups))}")
# Analyze performance
results_df = food_group_performance(
model=top_model,
X=X_test,
y=y_test,
food_groups=test_food_groups
)
# Display results
print("\nModel Performance by Food Group:")
print(results_df.to_string(index=False))
print(f"\nTotal samples analyzed: {results_df['Sample Size'].sum()}")
Total number of unique food groups: 25
Model Performance by Food Group:
Food Group Sample Size R² RMSE mean ME MAE aMPE aMAPE
Fats and Oils 18 0.6150 0.2100 0.2480 0.0170 0.1470 5.4350 46.0560
Dairy and Egg Products 35 0.8850 0.2300 0.2540 -0.0350 0.1410 -9.1920 37.1590
Meals, Entrees, and Side Dishes 16 0.8170 0.2610 0.8160 -0.0540 0.1960 -4.0230 14.7260
Restaurant Foods 20 0.6740 0.3570 0.7200 -0.0260 0.2550 -2.3340 22.5280
Soups, Sauces, and Gravies 36 0.6140 0.4520 0.4690 -0.1500 0.2850 -21.4220 40.6380
Fruits and Fruit Juices 62 0.4300 0.4540 0.4190 0.0580 0.2520 9.6430 41.7220
Fast Foods 46 0.5870 0.5280 0.9810 0.0510 0.2900 2.8400 16.1290
Poultry Products 55 0.8770 0.6420 1.0490 0.1660 0.4190 7.4340 18.7370
Baked Products 67 0.7440 0.7440 1.2190 0.1060 0.5180 3.9720 19.3350
Cereal Grains and Pasta 34 0.7440 1.1290 1.0860 0.0330 0.6210 1.2780 24.3070
Sausages and Luncheon Meats 25 0.5420 1.1920 0.8090 -0.0630 0.5980 -4.1490 39.1460
Beverages 44 -0.5910 1.3470 0.2870 -0.4290 0.5440 -82.9060 105.0180
Snacks 25 0.7170 1.3610 1.2020 0.2510 0.8130 9.1100 29.4510
Legumes and Legume Products 40 0.6230 1.4150 1.3220 -0.2450 0.8390 -7.6710 26.2230
Lamb, Veal, and Game Products 95 0.4470 1.6370 1.1610 0.3010 0.7510 11.6980 29.1570
Vegetables and Vegetable Products 144 0.5200 1.8290 0.7070 0.2440 0.4880 17.3340 34.6490
Finfish and Shellfish Products 53 0.2680 1.8700 0.7030 0.1060 0.8280 7.6930 59.8170
Pork Products 70 0.4750 1.9340 0.7050 0.1130 0.4290 8.7280 32.9900
Nut and Seed Products 26 0.6030 1.9580 1.6380 0.0190 1.3050 0.4020 27.0350
American Indian/Alaska Native Foods 20 0.5490 1.9650 0.9510 0.0940 1.1200 3.8420 45.9960
Sweets 27 0.4850 2.3730 0.5780 0.2640 0.8270 18.0420 56.5810
Beef Products 104 0.2240 3.3330 1.2560 0.4930 0.6240 16.8110 21.2900
Baby Foods 27 0.4820 11.0110 0.8510 3.3390 3.4610 57.7230 59.8340
Spices and Herbs 13 0.3790 14.1100 2.3000 5.8310 8.2860 33.9440 48.2360
Breakfast Cereals 9 -0.3250 23.8030 2.4810 13.5050 14.5210 65.1530 70.0570
Total samples analyzed: 1111
Key observations about the random forest, our best iron prediction model:
Performance varies significantly across food groups, with R² ranging from -0.59 (Beverages) to 0.89 (Dairy and Egg Products).
Several concerning areas:
Breakfast Cereals and Beverages show negative R² values, indicating poor model fit
Large RMSE values for Breakfast Cereals (23.80), Spices/Herbs (14.11), and Baby Foods (11.01)
High error rates (aMAPE >50%) for Beverages, Baby Foods, Finfish/Shellfish, and Sweets
Best performing groups:
Dairy/Egg Products: R²=0.89, RMSE=0.23
Poultry Products: R²=0.88, RMSE=0.64
Meals/Entrees: R²=0.82, RMSE=0.26
Sample size distribution is uneven (9 to 144 samples), potentially affecting reliability for smaller groups like Breakfast Cereals; a quick flagging sketch follows below.
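As a follow-up to these observations, a small sketch that flags food groups needing attention, using the results_df produced by food_group_performance; the thresholds (negative R², aMAPE above 50%, fewer than 20 samples) are judgment calls rather than part of the original analysis.
# Flag food groups with weak fit, high relative error, or small sample sizes.
flagged = results_df[(results_df['R²'] < 0) |
                     (results_df['aMAPE'] > 50) |
                     (results_df['Sample Size'] < 20)]
print(flagged[['Food Group', 'Sample Size', 'R²', 'RMSE', 'aMAPE']].to_string(index=False))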
USDA ASSUMPTION EVALUATION#
The USDA often uses calculations and estimates to fill in missing nutrient data when direct measurements are unavailable. Our model predicts iron content, allowing us to compare these predictions against USDA’s assumptions. Cases where our model’s predictions strongly diverge from USDA estimates are of particular interest, as they may highlight errors or overlooked factors in the current assumptions.
estimated_iron_mask = imputed_food_rows['source_type'].isin(['4', '7', '8', '9'])
estimated_iron_food_rows = imputed_food_rows[estimated_iron_mask]
X_calc, y_calc = apply_feature_selection_and_scaling(estimated_iron_food_rows)
y_usda = estimated_iron_food_rows["Iron, Fe"]
y_usda = np.array(y_usda)
# print(imputed_food_rows.shape)
# print(X_train.shape)
print(f"X_train.shape: {X_train.shape}")
# print(estimated_iron_food_rows.shape)
# print(X_calc.shape)
print(f"X_calc.shape: {X_calc.shape}")
print(f"Number of features: {len(feature_names)}")
print((feature_names))
# print(np.sort(feature_names))
y_model = np.expm1(top_model.predict(X_calc))
X_train.shape: (4440, 27)
X_calc.shape: (1810, 27)
Number of features: 27
['Ash', 'Calcium, Ca', 'Copper, Cu', 'Linoleic fatty acid', 'Magnesium, Mg', 'Niacin', 'Palmitoleic fatty acid', 'Phosphorus, P', 'Potassium, K', 'Riboflavin', 'Sodium, Na', 'Thiamin', 'Zinc, Zn', 'Oleic fatty acid', 'Protein', 'Water', 'embed_0', 'embed_1', 'embed_2', 'embed_3', 'embed_4', 'embed_5', 'embed_6', 'embed_7', 'cluster_Dairy and Egg Products', 'cluster_Dairy and Egg Products_1', 'cluster_nan']
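Since this comparison only makes sense if X_calc was built with exactly the same features and scaling as the training matrix, a small sanity-check sketch; it simply verifies the feature counts line up (column order is assumed to follow feature_names).
# Sanity check: the estimated-iron feature matrix must match training in feature count.
assert X_calc.shape[1] == X_train.shape[1] == len(feature_names), \
    "Feature mismatch between the training data and the USDA-estimated rows"
print("Feature matrices aligned:", X_calc.shape[1], "features")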
Model vs Imputation: Prediction Gaps#
def prediction_gaps():
    # Calculate differences and get sorted indices
    differences = np.abs(y_model - y_usda)
    sorted_indices = np.argsort(differences)
    # Sort all arrays based on differences
    y_model_sorted = y_model[sorted_indices]
    y_usda_sorted = y_usda[sorted_indices]
    x_sorted = np.arange(len(y_model))
    plt.figure(figsize=(30, 10))
    plt.plot(x_sorted, y_model_sorted, 'o', color='#FF6B6B', label='Model Predictions', markersize=2)
    plt.plot(x_sorted, y_usda_sorted, 'o', color='#4ECDC4', label='USDA Assumptions', markersize=2)
    # Add lines between corresponding points
    for i in range(len(x_sorted)):
        plt.plot([x_sorted[i], x_sorted[i]], [y_model_sorted[i], y_usda_sorted[i]], '-', color='gray', alpha=0.7, linewidth=1)
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.xlabel('Food Index (sorted by USDA vs Model Gap)', fontsize=12)
    plt.ylabel('Iron Content (mg per 100g Food Portion)', fontsize=12)
    # plt.yscale('log1p')
    plt.title('USDA vs Top Model Predictions', fontsize=14, pad=20)
    plt.legend(fontsize=10, loc='upper left')
    # Add a light background color
    plt.gca().set_facecolor('#f8f9fa')
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.tight_layout()
    plt.show()
prediction_gaps()

This scatter plot comparing USDA imputations and model predictions for iron content reveals several key patterns:
Divergence at the largest gaps: In the rightmost third of the plot, where the gap (and generally the iron content) is highest, the model predictions (red) and USDA assumptions (teal) separate sharply.
Consistency for low-iron foods: For most foods (roughly the first 1,000 items), iron content stays below 5 mg, with minimal differences between the two estimates.
Increased variability at higher levels: At higher iron levels the vertical spread widens noticeably, indicating greater disagreement; USDA estimates reach above 45 mg while the corresponding model predictions stay well below them.
Systematic discrepancy: The consistent divergence for iron-rich foods suggests the methodologies or assumptions behind the USDA imputations and the model differ most for these items.
These patterns highlight the need to examine how iron content is estimated, especially for high-iron foods where the disagreement is most pronounced; summary statistics of the gap are given in the sketch below.
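To put numbers on the divergence visible in the plot, a short sketch summarizing the signed and absolute gaps between the model predictions and the USDA imputations (both in mg per 100 g), reusing y_model and y_usda from above.
# Summary statistics of the model-vs-USDA gap (mg per 100 g).
gap = y_model - y_usda
print(f"Mean signed gap:     {gap.mean():.2f} mg (negative means the model is below USDA)")
print(f"Mean absolute gap:   {np.abs(gap).mean():.2f} mg")
print(f"Median absolute gap: {np.median(np.abs(gap)):.2f} mg")
print(f"95th pct abs gap:    {np.percentile(np.abs(gap), 95):.2f} mg")
print(f"Correlation:         {np.corrcoef(y_model, y_usda)[0, 1]:.3f}")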
def usda_prediction_gaps2(mask, title):
    y_model2 = y_model[mask]
    y_usda2 = y_usda[mask]
    # Calculate differences and get sorted indices
    differences = np.abs(y_model2 - y_usda2)
    sorted_indices = np.argsort(differences)
    # Sort all arrays based on differences
    y_model_sorted = y_model2[sorted_indices]
    y_usda_sorted = y_usda2[sorted_indices]
    x_sorted = np.arange(len(y_model2))
    plt.figure(figsize=(15, 3))
    plt.plot(x_sorted, y_model_sorted, 'o', color='#FF6B6B', label='Model Prediction', markersize=2)
    plt.plot(x_sorted, y_usda_sorted, 'o', color='#4ECDC4', label='USDA Imputed', markersize=2)
    # Add lines between corresponding points
    for i in range(len(x_sorted)):
        plt.plot([x_sorted[i], x_sorted[i]], [y_model_sorted[i], y_usda_sorted[i]], '-', color='gray', alpha=0.7, linewidth=1)
    plt.grid(True, linestyle='--', alpha=0.7)
    # plt.xlabel('100g Food Sample (sorted by USDA vs. Top Model Gap)', fontsize=12)
    plt.xlabel(title, fontsize=12)
    plt.ylabel('Iron Content (mg)', fontsize=12)
    # plt.yscale('log')
    # plt.title(title, fontsize=14, pad=20)
    # plt.title('USDA vs Top Model Predictions', fontsize=14, pad=20)
    plt.legend(fontsize=10, loc='upper left')
    # Add a light background color
    plt.gca().set_facecolor('#f8f9fa')
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.tight_layout()
    plt.show()
# for food_group in estimated_iron_food_rows['food_group'].unique():
# usda_prediction_gaps2(
# estimated_iron_food_rows['food_group'].isin([food_group, '7', '8', '9'])
# ,f"{food_group} Food Sample (sorted by size of USDA vs Model Gap)")
# # ,f"USDA vs Model Predictions for {food_group}")
# Get the top 10 food groups by count
top_n_groups = (estimated_iron_food_rows['food_group']
                .value_counts()
                .head(10)
                .index
                .tolist())
# Run the analysis for each of the top 10 food groups
for food_group in top_n_groups:
    usda_prediction_gaps2(
        estimated_iron_food_rows['food_group'] == food_group,
        f"{food_group} Food Sample (sorted by size of USDA vs Model Gap)")










The food group breakdowns reveal distinct patterns in how model predictions differ from USDA imputations:
Breakfast Cereals: The largest divergence occurs here, with USDA imputations consistently higher than the model's predictions for iron-rich, fortified varieties. Fortification levels are hard to infer from the other nutrients, so the model tends to underestimate these foods; conversely, some USDA fortification assumptions may merit review.
Beef Products: Agreement is strong, with minor variations around the 2–3 mg range, reflecting reliable methods for estimating naturally occurring iron.
Baby Foods and Beverages: Minimal disagreement exists at lower iron levels, but divergence grows at higher levels, likely due to uncertainty around fortification.
Dairy/Eggs, Fats/Oils, and Baked Products: These groups show close alignment between model predictions and USDA values, apart from occasional outliers.
Sweets: Divergence increases with iron content, though overall levels remain low compared to other groups.
These trends suggest that either our prediction model or the USDA methods need review for fortified foods and other items with high iron content; a per-group summary of the gaps is sketched below.
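The per-group plots can also be condensed into a single ranking; a sketch follows, assuming the rows of estimated_iron_food_rows align one-to-one with y_model and y_usda as in the cells above.
# Mean and maximum absolute model-vs-USDA gap per food group.
gap_by_group = (pd.DataFrame({
        'food_group': estimated_iron_food_rows['food_group'].values,
        'abs_gap_mg': np.abs(y_model - y_usda)})
    .groupby('food_group')['abs_gap_mg']
    .agg(['count', 'mean', 'max'])
    .sort_values('mean', ascending=False)
    .round(2))
print(gap_by_group.head(10).to_string())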
Model vs Imputation: Notable Foods#
def gap_foods():
    large_gap_foods = 50
    # Calculate absolute differences on the original scale
    differences = np.abs(y_model - y_usda)
    sorted_indices = np.argsort(differences)[::-1]  # Reverse to get largest differences first
    # Create a dataframe with both transformed and untransformed predictions
    comparison_df = pd.DataFrame({
        'Food': estimated_iron_food_rows.index[sorted_indices],
        # 'Model_Prediction_Log': y_model[sorted_indices],
        # 'USDA_Estimate_Log': y_usda[sorted_indices],
        'Model_Prediction_mg': y_model[sorted_indices],
        'USDA_Estimate_mg': y_usda[sorted_indices],
        'Absolute_Difference_mg': differences[sorted_indices]
    })
    # Get food names for the top differences
    food_names = [estimated_iron_food_rows.loc[food]['food_name']
                  for food in comparison_df['Food'].head(large_gap_foods)]
    # Create a clean dataframe with just the columns we want
    results_df = pd.DataFrame({
        'Food_Name': food_names,
        'Model_Prediction_mg': comparison_df['Model_Prediction_mg'].head(large_gap_foods).round(2),
        'USDA_Estimate_mg': comparison_df['USDA_Estimate_mg'].head(large_gap_foods).round(2),
        'Absolute_Difference_mg': comparison_df['Absolute_Difference_mg'].head(large_gap_foods).round(2)
    })
    # Display the dataframe
    print("\nFoods with Large USDA vs Top Model Prediction Gaps\n")
    print(results_df.to_string(index=False))
gap_foods()
Foods with Large USDA vs Top Model Prediction Gaps
Food_Name Model_Prediction_mg USDA_Estimate_mg Absolute_Difference_mg
Cereals, QUAKER, Quick Oats with Iron, Dry 4.6200 49.4500 44.8300
Babyfood, cereal, brown rice, dry, instant 4.5200 47.6000 43.0800
Cereals, MALT-O-MEAL, chocolate, dry 5.4700 42.8800 37.4100
Cereals, QUAKER, Instant Grits, Redeye Gravy & Country Ham flavor, dry 6.8600 42.0300 35.1700
Cocoa, dry powder, unsweetened, HERSHEY'S European Style Cocoa 1.1200 36.0000 34.8800
Cereals, MALT-O-MEAL, Farina Hot Wheat Cereal, dry 6.6800 40.9300 34.2500
Cereals ready-to-eat, MALT-O-MEAL, OAT BLENDERS with honey & almonds 7.6300 41.3800 33.7500
Babyfood, cereal, barley, dry fortified 15.3400 48.2100 32.8700
Cereals, QUAKER, Instant Grits, Ham 'n' Cheese flavor, dry 6.3300 38.2000 31.8700
Babyfood, cereal, oatmeal, with bananas, dry 16.2500 45.0000 28.7500
Cereals ready-to-eat, POST, HONEY BUNCHES OF OATS, with almonds 6.0900 33.8000 27.7100
Cereals ready-to-eat, RALSTON CRISP RICE 5.0500 32.7300 27.6800
Cereals ready-to-eat, MALT-O-MEAL, COLOSSAL CRUNCH 3.6300 30.0000 26.3700
Cereals ready-to-eat, MALT-O-MEAL, Blueberry MUFFIN TOPS Cereal 8.4400 34.2900 25.8500
Cereals, QUAKER, corn grits, instant, cheddar cheese flavor, dry 4.5500 30.3800 25.8300
Cereals ready-to-eat, QUAKER WHOLE HEARTS oat cereal 6.4600 32.1100 25.6500
Cereals, MALT-O-MEAL, original, plain, dry 5.5400 30.8600 25.3200
Goose, liver, raw 5.3600 30.5300 25.1700
Cereals ready-to-eat, MALT-O-MEAL, BERRY COLOSSAL CRUNCH 4.9300 30.0000 25.0700
Cereals ready-to-eat, MALT-O-MEAL, Honey BUZZERS 5.9700 31.0300 25.0600
Cereals, CREAM OF RICE, dry 3.4900 28.4400 24.9500
Cereals, QUAKER, Instant Grits Product with American Cheese Flavor, dry 6.4100 30.3800 23.9700
Cereals, MALT-O-MEAL, Maple & Brown Sugar Hot Wheat Cereal, dry 7.6200 31.2900 23.6700
Cereals, QUAKER, Instant Grits, Country Bacon flavor, dry 6.8400 30.3800 23.5400
Cereals ready-to-eat, QUAKER, KING VITAMAN 5.5300 29.0000 23.4700
Cereals ready-to-eat, POST, GRAPE-NUTS Cereal 4.5900 28.0000 23.4100
Cereals ready-to-eat, POST, ALPHA-BITS 6.9800 30.0000 23.0200
Cereals ready-to-eat, MALT-O-MEAL, CINNAMON TOASTERS 6.9900 30.0000 23.0100
Cereals ready-to-eat, POST, Honey Nut Shredded Wheat 5.0200 28.0000 22.9800
Cereals, QUAKER, Instant Grits, Butter flavor, dry 7.0900 30.0000 22.9100
Cereals ready-to-eat, POST HONEY BUNCHES OF OATS with cinnamon bunches 5.5400 28.0000 22.4600
Cereals ready-to-eat, POST, GRAPE-NUTS Flakes 5.4700 27.9000 22.4300
Cereals ready-to-eat, RALSTON Crispy Hexagons 5.5400 27.9300 22.3900
Cereals ready-to-eat, QUAKER, QUAKER CRUNCHY BRAN 8.7900 30.9800 22.1900
Cereals ready-to-eat, MALT-O-MEAL, Frosted Mini SPOONERS 7.3400 29.4500 22.1100
Cereals ready-to-eat, MALT-O-MEAL, TOOTIE FRUITIES 6.0700 28.1200 22.0500
Cereals ready-to-eat, MALT-O-MEAL, HONEY GRAHAM SQUARES 11.9300 33.8100 21.8800
Cereals ready-to-eat, POST, HONEY BUNCHES OF OATS with vanilla bunches 7.2000 28.9000 21.7000
Cereals ready-to-eat, MALT-O-MEAL, COCO-ROOS 8.4000 30.0000 21.6000
Cereals, ready-to-eat, MALT-O-MEAL, Blueberry Mini SPOONERS 8.1500 29.4500 21.3000
Cereals ready-to-eat, POST GREAT GRAINS Banana Nut Crunch 6.8800 28.0000 21.1200
Cereals ready-to-eat, MALT-O-MEAL, Crispy Rice 6.5400 27.2700 20.7300
Cereals ready-to-eat, MALT-O-MEAL, Honey Nut SCOOTERS 9.3800 30.0000 20.6200
Cereals ready-to-eat, POST, HONEY BUNCHES OF OATS, with real strawberries 5.7500 26.1000 20.3500
Babyfood, cereal, mixed, dry fortified 9.9300 30.0000 20.0700
Cereals ready-to-eat, MALT-O-MEAL, OAT BLENDERS with honey 6.9500 27.0000 20.0500
Cereals ready-to-eat, QUAKER Oatmeal Squares, Golden Maple 9.3100 29.0000 19.6900
Cereals ready-to-eat, MALT-O-MEAL, MARSHMALLOW MATEYS 10.5400 30.0000 19.4600
Cereals ready-to-eat, POST Bran Flakes 8.6400 28.0000 19.3600
Babyfood, Multigrain whole grain cereal, dry fortified 10.8200 30.0000 19.1800
The detailed data reveals significant discrepancies between model predictions and USDA estimates, particularly for fortified foods:
Fortified Cereals and Baby Foods: These categories show the largest gaps, often exceeding 20–40 mg. Quick Oats with Iron leads with a 44.8 mg difference.
Specific Products: Fortified breakfast cereals (MALT-O-MEAL, QUAKER, and POST brands), baby food cereals (notably rice- and barley-based), and instant grits consistently show lower model predictions than USDA estimates.
Outlier: HERSHEY'S European Style Cocoa Powder has a 34.9 mg gap, highlighting potential issues in estimating iron for concentrated dry powders.
These discrepancies suggest that USDA methods for products with artificially added iron, as opposed to naturally occurring iron, deserve closer examination, and equally that the model struggles to infer fortification from the other nutrients; a quick keyword-based check is sketched below.
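One quick way to probe the added-iron hypothesis, sketched here: flag foods whose names suggest fortification and compare gap sizes between flagged and unflagged foods. The keyword list is a rough heuristic of ours, not an established USDA flag.
# Heuristic fortification flag from the food name (keywords are an illustrative guess).
names = estimated_iron_food_rows['food_name'].str.lower()
fortified_mask = names.str.contains('fortified|with iron|ready-to-eat|instant', regex=True, na=False)
abs_gap = np.abs(y_model - y_usda)
print(f"Mean |gap|, likely fortified: {abs_gap[fortified_mask.values].mean():.2f} mg (n={int(fortified_mask.sum())})")
print(f"Mean |gap|, other foods:      {abs_gap[~fortified_mask.values].mean():.2f} mg (n={int((~fortified_mask).sum())})")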
CONCLUSIONS#
Model Performance#
Random Forest achieved the best results (RMSE 0.2653), slightly outperforming XGBoost (RMSE 0.2755).
Linear models like Linear Regression and Elastic Net performed poorly (RMSE 0.3649), likely due to non-linear nutrient interactions.
Neural Networks showed moderate success (RMSE 0.2993) but fell short of tree-based models despite their potential for handling complex patterns.
Notable Insights#
Tree-based models outperformed Neural Networks, which was unexpected given the latter’s flexibility in learning complex relationships.
The poor performance of linear models suggests the data is highly non-linear.
Prediction accuracy varied significantly across food categories, with R² ranging from -0.59 (poor) to 0.89 (excellent).
USDA Imputation Analysis#
Significant gaps were observed between model predictions and USDA estimates, especially for fortified foods:
Breakfast cereals and baby foods showed the largest discrepancies, with gaps routinely exceeding 20 mg of iron and approaching 45 mg in the most extreme cases.
Predictions for naturally occurring iron content were more consistent with USDA values.
Study Limitations#
Poor prediction for certain categories like Beverages and Breakfast Cereals.
Limited data for some food groups reduced reliability.
Difficulty in accurately modeling extreme iron values in fortified products.
Next Steps#
Investigate systematic differences in model predictions versus USDA estimates for fortified foods.
Collect additional data for underrepresented food groups to improve reliability.
Develop specialized models for natural vs. fortified iron content.
Explore ensemble methods that combine the strengths of different models; a minimal blending sketch follows this list.
Validate predictions through laboratory measurements to refine models and improve accuracy.
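For the ensemble idea above, a minimal sketch that blends the two strongest models by averaging their predictions on the log1p scale; best_rf comes from the model comparison, best_xgb is assumed to exist under that name, and the equal weighting is an arbitrary starting point rather than a tuned choice.
def blended_predict(models, X, weights=None):
    # Average (optionally weighted) predictions from several fitted regressors.
    preds = np.column_stack([m.predict(X) for m in models])
    w = np.ones(preds.shape[1]) if weights is None else np.asarray(weights, dtype=float)
    return preds @ (w / w.sum())

# y_pred_blend = blended_predict([best_rf, best_xgb], X_test)
# print(evaluate_model(y_test, y_pred_blend, is_log_scaled=True))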