The Python script, 'QuoreMindHP v1.0.0', serves as a high-precision framework for implementing advanced mathematical and statistical logic. It leverages the mpmath library for arbitrary-precision arithmetic, enabling highly accurate computations in Bayesian inference, statistical analysis (including Mahalanobis distance and Shannon entropy), and the modeling of Probabilistic Reference Noise (PRN). The script includes decorators for performance timing and input validation, and it provides comprehensive demonstrations of its capabilities, highlighting the benefits of high-precision calculations over standard floating-point arithmetic.
- Global Precision Configuration: Sets the global decimal precision for all mpmath operations, ensuring consistent high-accuracy calculations across the framework.
- Decorators: Includes timer_decorator for measuring function execution time and validate_mp_input_decorator for enforcing input range constraints on mpmath.mpf arguments, enhancing robustness and traceability.
- BayesLogicConfigHP: A dataclass that defines configuration parameters (such as epsilon, entropy, coherence, and action thresholds) for the BayesLogicHP class, ensuring all values are stored as mpmath.mpf for high precision.
- BayesLogicHP: Implements core Bayesian logic functions, such as calculating posterior, conditional, and joint probabilities, and deriving priors based on entropy and coherence. All calculations are performed using mpmath for high precision, and methods are decorated for validation and timing.
- StatisticalAnalysisHP: Provides high-precision statistical tools, including Shannon entropy calculation, directional cosine computation, and a custom implementation of Mahalanobis distance using mpmath. It also includes a comparative Mahalanobis calculation using numpy/scipy to demonstrate precision differences.
- PRN_HP: A class for modeling Probabilistic Reference Noise (PRN), managing its influence with high precision. It allows adjusting and combining PRN influences, facilitating complex system modeling where noise characteristics are critical.
- High-Precision 'e' Calculation: A standalone function (calculate_e_mpmath) that computes the mathematical constant 'e' to a specified arbitrary precision using several methods (Taylor series, limit definition, and mpmath's internal function), showcasing mpmath's capability for fundamental constant calculation.
- Demonstration Functions: A set of functions (run_bayes_logic_hp_example, run_statistical_analysis_hp_example, run_prn_hp_example, run_e_calculation_example) that orchestrate and present the usage and results of the high-precision components, illustrating the framework's practical application.
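The two decorators listed above are only named in this overview. A minimal sketch of how such a timing/validation pair is commonly written follows; the decorator names match the description, but the bodies, the range defaults, and the Spanish timing message are assumptions, not the script's actual code.

```python
import functools
import time

import mpmath

def timer_decorator(func):
    """Print how long the wrapped function took (message format assumed)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"Función {func.__name__} ejecutada en {elapsed:.6f} segundos")
        return result
    return wrapper

def validate_mp_input_decorator(min_val=0, max_val=1):
    """Reject mpmath.mpf positional arguments outside [min_val, max_val]."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for arg in args:
                if isinstance(arg, mpmath.mpf) and not (min_val <= arg <= max_val):
                    raise ValueError(f"Argument {arg} outside [{min_val}, {max_val}]")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@timer_decorator
@validate_mp_input_decorator(min_val=0, max_val=1)
def scaled(x):
    return x * 2

result = scaled(mpmath.mpf("0.25"))
```

Stacking the validator inside the timer means the timing line is printed even when validation fails, which matches the traceability goal described above.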
- Global Precision Settings (mpmath.mp.dps) -> All HP Classes/Functions: Configures the numerical precision used by every high-precision class and function.
- Raw Data (List[Any]) -> StatisticalAnalysisHP: Provides the input samples for entropy calculation.
- Calculated Entropy/Coherence/PRN Influence (MP_Float) -> BayesLogicHP: Serve as priors and conditional inputs for Bayesian inference.
- BayesLogicConfigHP (Dataclass) -> BayesLogicHP: Supplies configuration parameters (epsilon, thresholds) to the Bayesian logic engine.
- Bayesian Probabilities (MP_Float) -> Application Logic: Provide the insights that drive downstream decisions.
- Statistical Data (List[List[Union[float, str]]]) -> StatisticalAnalysisHP: Input for the Mahalanobis distance calculation.
- PRN_HP Instance -> PRN_HP Methods: Stores and manipulates Probabilistic Reference Noise properties.
- PRN_HP Instances -> PRN_HP.combine_with: Merged to form a new, weighted PRN object.
- Calculation Parameters (method, iterations, precision_dps) -> calculate_e_mpmath: Direct the method, iteration count, and precision of the 'e' computation.
- mpmath Library -> All HP Classes/Functions: Performs the core arbitrary-precision arithmetic.
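The first arrow above, the global precision setting, can be illustrated in isolation. A minimal sketch (PRECISION_DPS = 50 is an assumed value; the script's actual setting is not shown in this document):

```python
import mpmath

# Assumed global precision; the framework sets mpmath.mp.dps once at startup.
PRECISION_DPS = 50
mpmath.mp.dps = PRECISION_DPS

# Every subsequent mpf operation now carries ~50 significant decimal digits.
third = mpmath.mpf(1) / 3
print(mpmath.nstr(third, n=50))

# Contrast with float64, which holds only ~15-17 significant digits.
print(repr(1 / 3))
```

Because mp.dps is global context state, setting it once up front is what makes the "consistent high-accuracy calculations across the framework" guarantee hold.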
This scenario demonstrates the BayesLogicHP class by calculating Bayesian probabilities and an optimal action based on simulated inputs. It starts by computing normalized Shannon entropy from sample data, then uses this along with coherence, PRN influence, and an initial action to derive high-precision posterior and conditional probabilities, culminating in a recommended action.
Code Snippet:
def run_bayes_logic_hp_example():
    print("\n" + "="*15 + " BayesLogicHP Example " + "="*15)
    # Use mpmath.mpf for inputs
    data_for_entropy = ['a', 'b', 'b', 'c', 'a', 'b', 'd']
    entropy_value = StatisticalAnalysisHP.shannon_entropy(data_for_entropy)
    # Normalize entropy value to be between 0 and 1
    num_unique_outcomes = len(set(data_for_entropy))
    max_shannon_entropy = mpmath.log(mpmath.mpf(num_unique_outcomes), 2) if num_unique_outcomes > 1 else mpmath.mpf(0)
    # Handle case where max_shannon_entropy is zero (only one unique outcome)
    if max_shannon_entropy == mpmath.mpf(0):
        normalized_entropy = mpmath.mpf(0)
    else:
        normalized_entropy = mpmath.fdiv(entropy_value, max_shannon_entropy)
    coherence_value = mpmath.mpf("0.7")
    prn_influence = mpmath.mpf("0.8")
    action_input = 1
    # Optional custom config
    config_hp = BayesLogicConfigHP(
        epsilon="1e-30",
        high_entropy_threshold="0.75",
        action_threshold="0.51"
    )
    bayes_hp = BayesLogicHP(config_hp)
    decision = bayes_hp.calculate_probabilities_and_select_action(
        normalized_entropy, coherence_value, prn_influence, action_input
    )
    print("--- Bayesian Decision Results (High Precision) ---")
    for key, value in decision.items():
        if isinstance(value, MP_Float):
            # Display with good precision using nstr
            print(f" {key:<28}: {mpmath.nstr(value, n=PRECISION_DPS)}")
        else:
            print(f" {key:<28}: {value}")

Expected Output:
============== BayesLogicHP Example ==============
Función calculate_probabilities_and_select_action ejecutada en X.XXXXXX segundos
--- Bayesian Decision Results (High Precision) ---
action_to_take : 1
high_entropy_prior : 0.10000000000000000000000000000000000000000000000000
high_coherence_prior : 0.60000000000000000000000000000000000000000000000000
posterior_a_given_b : 0.05000000000000000000000000000000000000000000000000
conditional_action_given_b: 0.80000000000000000000000000000000000000000000000000
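The normalization step in the snippet divides Shannon entropy by log2 of the number of unique outcomes. A self-contained sketch of that computation, assuming a standard frequency-based entropy rather than the script's actual StatisticalAnalysisHP.shannon_entropy:

```python
from collections import Counter

import mpmath

mpmath.mp.dps = 50  # assumed working precision

def shannon_entropy_mp(data):
    """Shannon entropy in bits, H = -sum(p * log2(p)), in mpmath precision."""
    counts = Counter(data)
    n = mpmath.mpf(len(data))
    entropy = mpmath.mpf(0)
    for c in counts.values():
        p = mpmath.mpf(c) / n
        entropy -= p * mpmath.log(p, 2)
    return entropy

data = ['a', 'b', 'b', 'c', 'a', 'b', 'd']   # same sample as the snippet
h = shannon_entropy_mp(data)
h_max = mpmath.log(mpmath.mpf(len(set(data))), 2)  # log2(4) = 2 bits
normalized = h / h_max
print(mpmath.nstr(normalized, n=30))
```

Dividing by log2 of the alphabet size bounds the result in [0, 1], which is what lets the normalized value be compared against the high_entropy_threshold in the config.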
This scenario highlights the StatisticalAnalysisHP class, demonstrating its capabilities for high-precision Shannon entropy calculation, directional cosine derivation, and Mahalanobis distance computation. It compares the mpmath-based Mahalanobis distance with the standard numpy/scipy version, showcasing the numerical differences and the benefits of arbitrary precision, especially for sensitive datasets.
Code Snippet:
def run_statistical_analysis_hp_example():
    print("\n" + "="*15 + " StatisticalAnalysisHP Example " + "="*15)
    stats_hp = StatisticalAnalysisHP()
    # 1. Entropy
    data_entropy = [1, 1, 2, 3, 3, 3, 4, 4, 5]
    entropy_hp = stats_hp.shannon_entropy(data_entropy)
    print("--- Shannon Entropy (High Precision) ---")
    print(f"Data: {data_entropy}")
    print(f"Entropy: {mpmath.nstr(entropy_hp, n=PRECISION_DPS)}")
    # 2. Cosines
    entropy_val = mpmath.mpf("0.9")
    prn_val = mpmath.mpf("0.2")
    cos_x, cos_y, cos_z = stats_hp.calculate_cosines(entropy_val, prn_val)
    print("\n--- Directional Cosines (High Precision) ---")
    print(f"cos_x: {mpmath.nstr(cos_x, n=15)}")
    print(f"cos_y: {mpmath.nstr(cos_y, n=15)}")
    print(f"cos_z: {mpmath.nstr(cos_z, n=15)}")
    # 3. Mahalanobis Distance (High Precision vs NumPy)
    # Data where the covariance matrix might be sensitive
    data_mahalanobis = [
        ['1.0', '2.0'],
        ['1.1', '2.1'],
        ['3.0', '4.0'],
        ['3.1', '4.1'],
        ['5.0', '6.0'],
        ['5.1', '6.1'],
        ['7.0', '8.0'],
        ['7.1', '8.1'],
        ['9.0', '10.0'],
        ['9.1', '10.1']
    ]
    point_mahalanobis = ['2.5', '3.5']
    print("\n--- Mahalanobis Distance (High Precision vs NumPy) ---")
    print(f"Data (represented as strings for precision): {data_mahalanobis}")
    print(f"Point: {point_mahalanobis}")
    # Convert to float for numpy (precision might be lost here)
    data_mahalanobis_float = [[float(x) for x in row] for row in data_mahalanobis]
    point_mahalanobis_float = [float(x) for x in point_mahalanobis]
    distance_hp = stats_hp.compute_mahalanobis_distance_hp(data_mahalanobis, point_mahalanobis)
    distance_np = stats_hp.compute_mahalanobis_distance_numpy(data_mahalanobis_float, point_mahalanobis_float)
    print(f"\nMahalanobis Distance (High Precision): {mpmath.nstr(distance_hp, n=PRECISION_DPS)}")
    print(f"Mahalanobis Distance (NumPy float64): {distance_np:.15f}")
    diff = mpmath.fabs(distance_hp - mpmath.mpf(str(distance_np)))
    print(f"Absolute Difference: {mpmath.nstr(diff, n=PRECISION_DPS)}")

Expected Output:
============== StatisticalAnalysisHP Example ==============
--- Shannon Entropy (High Precision) ---
Data: [1, 1, 2, 3, 3, 3, 4, 4, 5]
Entropy: 2.1643960098066535492454655938596645396558509355798
--- Directional Cosines (High Precision) ---
cos_x: 0.655941656891081
cos_y: 0.145764812642462
cos_z: 0.741003444986518
--- Mahalanobis Distance (High Precision vs NumPy) ---
Data (represented as strings for precision): [['1.0', '2.0'], ['1.1', '2.1'], ['3.0', '4.0'], ['3.1', '4.1'], ['5.0', '6.0'], ['5.1', '6.1'], ['7.0', '8.0'], ['7.1', '8.1'], ['9.0', '10.0'], ['9.1', '10.1']]
Point: ['2.5', '3.5']
Matriz de Covarianza (mpmath):
[ 8.8777777777777777777777777777777777777777777777778 8.8777777777777777777777777777777777777777777777778]
[ 8.8777777777777777777777777777777777777777777777778 8.8777777777777777777777777777777777777777777777778]
Inversa de la Matriz de Covarianza (mpmath):
[ 1.1264367816091954022988505747126436781609195402299 -1.1264367816091954022988505747126436781609195402299]
[-1.1264367816091954022988505747126436781609195402299 1.1264367816091954022988505747126436781609195402299]
Función compute_mahalanobis_distance_hp ejecutada en X.XXXXXX segundos
Función compute_mahalanobis_distance_numpy ejecutada en X.XXXXXX segundos
Mahalanobis Distance (High Precision): 0.00000000000000000000000000000000000000000000000000
Mahalanobis Distance (NumPy float64): 0.000000000000000
Absolute Difference: 0.00000000000000000000000000000000000000000000000000
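The precision comparison above hinges on how the covariance matrix is built and inverted. A minimal mpmath-based Mahalanobis sketch follows; it is a simplified stand-in for compute_mahalanobis_distance_hp, whose actual implementation is not shown. Note that in the snippet's own dataset the second column is exactly the first column plus 1, so its covariance matrix is singular; the sketch therefore uses a different, well-conditioned toy dataset.

```python
import mpmath

mpmath.mp.dps = 50  # assumed working precision

def mahalanobis_mp(data_rows, point):
    """Mahalanobis distance of `point` from the mean of `data_rows`,
    with the covariance matrix built and inverted entirely in mpmath."""
    X = mpmath.matrix([[mpmath.mpf(v) for v in row] for row in data_rows])
    n, d = X.rows, X.cols
    mean = [sum(X[i, j] for i in range(n)) / n for j in range(d)]
    # Sample covariance matrix (divide by n - 1)
    cov = mpmath.zeros(d, d)
    for j in range(d):
        for k in range(d):
            cov[j, k] = sum((X[i, j] - mean[j]) * (X[i, k] - mean[k])
                            for i in range(n)) / (n - 1)
    diff = mpmath.matrix([mpmath.mpf(p) - m for p, m in zip(point, mean)])
    inv_cov = cov ** -1          # arbitrary-precision matrix inverse
    return mpmath.sqrt((diff.T * inv_cov * diff)[0, 0])

# Well-conditioned toy data, values passed as strings to avoid float rounding.
rows = [['1.0', '2.0'], ['2.0', '1.0'], ['3.0', '5.0'], ['4.0', '3.0']]
dist = mahalanobis_mp(rows, ['2.0', '2.0'])
print(mpmath.nstr(dist, n=30))
```

Passing the values as strings into mpmath.mpf, as the snippet's dataset does, is what keeps inputs exact; converting through Python floats first would already discard digits before the matrix inverse is taken.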
This scenario demonstrates the functionality of the PRN_HP class, which manages Probabilistic Reference Noise with high-precision influence values. It illustrates the creation of PRN objects, adjusting an object's influence using a high-precision adjustment, and combining two PRN objects with a specified weight to form a new, combined PRN, showcasing how PRN parameters and influence evolve.
Code Snippet:
def run_prn_hp_example():
    print("\n" + "="*15 + " PRN_HP Example " + "="*15)
    prn1 = PRN_HP(influence="0.65", algorithm_type="Kalman", state_dim=4)
    prn2 = PRN_HP(influence="0.8", algorithm_type="Particle", num_particles=5000)
    print(f"PRN 1: {prn1}")
    print(f"PRN 2: {prn2}")
    prn1.adjust_influence("-0.1")
    print(f"PRN 1 (Adjusted): {prn1}")
    combined_prn = prn1.combine_with(prn2, weight="0.7")
    print(f"Combined PRN: {combined_prn}")

Expected Output:
============== PRN_HP Example ==============
PRN 1: PRN_HP(influence=0.6500000000, algorithm=Kalman, state_dim=4)
PRN 2: PRN_HP(influence=0.8000000000, algorithm=Particle, num_particles=5000)
PRN 1 (Adjusted): PRN_HP(influence=0.5500000000, algorithm=Kalman, state_dim=4)
Combined PRN: PRN_HP(influence=0.6250000000, algorithm=Kalman, state_dim=4, num_particles=5000)
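From the output above, the combined influence follows a weighted average: 0.7 · 0.55 + 0.3 · 0.8 = 0.625. A minimal sketch of that rule follows; the real PRN_HP class is not shown in this document, so this reconstruction (including the clamping and parameter-merge behavior) is an assumption inferred from the printed results.

```python
import mpmath

mpmath.mp.dps = 50  # assumed working precision

class PRNSketch:
    """Simplified stand-in for PRN_HP: a high-precision influence in [0, 1]
    plus arbitrary algorithm parameters."""
    def __init__(self, influence, algorithm_type, **params):
        self.influence = mpmath.mpf(influence)
        self.algorithm_type = algorithm_type
        self.params = params

    def adjust_influence(self, delta):
        # Apply the high-precision adjustment, then clamp to [0, 1].
        self.influence = max(mpmath.mpf(0),
                             min(mpmath.mpf(1), self.influence + mpmath.mpf(delta)))

    def combine_with(self, other, weight):
        # Weighted average of influences; keep self's algorithm, merge params.
        w = mpmath.mpf(weight)
        combined = w * self.influence + (1 - w) * other.influence
        merged = {**other.params, **self.params}
        return PRNSketch(combined, self.algorithm_type, **merged)

prn1 = PRNSketch("0.65", "Kalman", state_dim=4)
prn2 = PRNSketch("0.8", "Particle", num_particles=5000)
prn1.adjust_influence("-0.1")              # 0.65 -> 0.55
combined = prn1.combine_with(prn2, "0.7")
print(mpmath.nstr(combined.influence, 10))
```

Accepting influence and weight as strings mirrors the documented calls (influence="0.65", weight="0.7"), so the decimals enter mpmath exactly rather than through float64.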
This scenario demonstrates the calculate_e_mpmath function, showcasing its ability to compute the mathematical constant 'e' to an arbitrarily high precision. It calculates 'e' using three distinct methods: the Taylor series expansion, the limit definition (as n approaches infinity), and directly via mpmath's internal exponential function, illustrating the accuracy and performance of each approach.
Code Snippet:
def run_e_calculation_example():
    print("\n" + "="*15 + " 'e' Calculation Example (High Precision) " + "="*15)
    prec_e = 100  # Calculate 'e' with 100 decimal digits
    print(f"Calculating 'e' with {prec_e} decimal digits of precision:")
    e_taylor = calculate_e_mpmath(method='taylor', iterations=100, precision_dps=prec_e)
    print(f" e (Taylor, 100 iter): {mpmath.nstr(e_taylor, n=prec_e)}")
    # For the limit, 'iterations' is 'n'. Needs a large n.
    e_limit = calculate_e_mpmath(method='limit', iterations=100000, precision_dps=prec_e)
    print(f" e (Limit, n=100k): {mpmath.nstr(e_limit, n=prec_e)}")
    e_const = calculate_e_mpmath(method='const', precision_dps=prec_e)
    print(f" e (mpmath.exp(1)): {mpmath.nstr(e_const, n=prec_e)}")

Expected Output:
============== 'e' Calculation Example (High Precision) ==============
Calculating 'e' with 100 decimal digits of precision:
Función calculate_e_mpmath ejecutada en X.XXXXXX segundos
Convergencia de Taylor alcanzada en iteración Y
e (Taylor, 100 iter): 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664282
Función calculate_e_mpmath ejecutada en X.XXXXXX segundos
e (Limit, n=100k): 2.7182682371922971206195805500067667950235078500206161821894981155097479705915729792039233959196658999
Función calculate_e_mpmath ejecutada en X.XXXXXX segundos
e (mpmath.exp(1)): 2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664282
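The Taylor-series route above can be sketched in a few lines. This is a simplified version of calculate_e_mpmath restricted to one method, assuming a plain running sum of 1/k! (the real function also supports 'limit' and 'const' and reports convergence, which is omitted here):

```python
import mpmath

def e_taylor_mp(iterations, precision_dps):
    """Approximate e = sum over k of 1/k!, evaluated with mpmath at the
    requested number of decimal digits."""
    with mpmath.workdps(precision_dps):
        total = mpmath.mpf(0)
        term = mpmath.mpf(1)           # 1/0! = 1
        for k in range(1, iterations + 1):
            total += term
            term /= k                  # next term: 1/k!
        return +total                  # unary + rounds to working precision

e_100 = e_taylor_mp(iterations=100, precision_dps=100)
print(mpmath.nstr(e_100, n=50))
```

The factorial series converges extremely fast: after 100 terms the truncation error is on the order of 1/100!, far below 100-digit precision, which is why the Taylor row in the output matches mpmath.exp(1) digit for digit while the limit definition with n = 100000 is only accurate to about five decimal places.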