
Logits Overflow Problem #3

@mohitsharma29

Description


Hey, thanks for releasing your code. I installed the package from pip and am trying to evaluate the fairness-accuracy tradeoff on my dataset.

  1. Here is the error I am getting. Could you help me figure out what the issue is?

```
RuntimeError                              Traceback (most recent call last)
Cell In[17], line 162
    159 for lam in np.arange(10, 1000, 10):
    160     fermi_instance = FERMI(train_embeddings, test_embeddings, train_soft.reshape(-1, 1), test_soft.reshape(-1, 1),
    161                            S_train=np.eye(2)[train_attrs], S_test=np.eye(2)[test_attrs], batch_size=64, epochs=2000, lam=lam)
--> 162     all_weights[lam] = FERMI_Logistic_Regression(fermi_instance)

Cell In[17], line 142
    140     loss_min = criterion(logits, YTorch)
    141 else:
--> 142     loss_min = criterion(logits, YTorch) + fermi.fairness_regularizer(XTorch, STorch)
    143     # loss_min = criterion(logits, YTorch)
    145 minimizer.zero_grad()

File ~/.conda/envs/llms/lib/python3.11/site-packages/torch/nn/modules/module.py:1739, in Module._wrapped_call_impl(self, *args, **kwargs)
   1737     return self._compiled_call_impl(*args, **kwargs)  # type: ignore[misc]
   1738 else:
-> 1739     return self._call_impl(*args, **kwargs)

File ~/.conda/envs/llms/lib/python3.11/site-packages/torch/nn/modules/module.py:1750, in Module._call_impl(self, *args, **kwargs)
   1745 # If we don't have any hooks, we want to skip the rest of the logic in
   1746 # this function, and just call forward.
   1747 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1748         or _global_backward_pre_hooks or _global_backward_hooks
   1749         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1750     return forward_call(*args, **kwargs)
   1752 result = None
   1753 called_always_called_hooks = set()

File ~/.conda/envs/llms/lib/python3.11/site-packages/torch/nn/modules/loss.py:699, in BCELoss.forward(self, input, target)
    698 def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 699     return F.binary_cross_entropy(
    700         input, target, weight=self.weight, reduction=self.reduction
    701     )

File ~/.conda/envs/llms/lib/python3.11/site-packages/torch/nn/functional.py:3569, in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
   3566     new_size = _infer_size(target.size(), weight.size())
   3567     weight = weight.expand(new_size)
-> 3569 return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)

RuntimeError: all elements of input should be between 0 and 1
```
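For reference, the trace ends in `nn.BCELoss`, which requires its inputs to be probabilities in [0, 1]; passing raw logits raises exactly this RuntimeError once an output leaves that range. Here is a minimal repro with two generic workarounds — this is my own sketch with made-up tensors, not this repo's code, and I don't know whether `criterion` in your training loop is meant to receive probabilities:

```python
import torch
import torch.nn as nn

# Hypothetical raw logits from a linear model (no sigmoid applied),
# standing in for the `logits` passed to `criterion` in the traceback.
logits = torch.tensor([[2.5], [-3.1], [0.7]])
targets = torch.tensor([[1.0], [0.0], [1.0]])

# BCELoss expects probabilities in [0, 1]; raw logits trigger the error.
try:
    nn.BCELoss()(logits, targets)
except RuntimeError as e:
    print("BCELoss failed:", e)

# Workaround 1: squash the logits with a sigmoid before BCELoss.
loss_a = nn.BCELoss()(torch.sigmoid(logits), targets)

# Workaround 2 (numerically safer): BCEWithLogitsLoss consumes raw logits
# directly and fuses the sigmoid into the loss for stability.
loss_b = nn.BCEWithLogitsLoss()(logits, targets)

print(torch.allclose(loss_a, loss_b))  # → True
```

If that diagnosis applies here, either wrapping the model output in `torch.sigmoid` before calling `criterion`, or switching `criterion` to `nn.BCEWithLogitsLoss`, should avoid the overflow; the latter is generally preferred for large `lam` sweeps since it never materializes a probability that can saturate to 0 or 1.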

  2. The library seems to support only demographic parity (DP). Where can I find extensions for equalized opportunity (EOP) or equalized odds (EODDS)?
