
Getting RuntimeError in styleflow_sample.ipynb #1


Description

@ujjawalcse

Hey @Hramchenko,
Thanks for this amazing repository.
I'm trying to execute styleflow_sample.ipynb on Google Colab, but I'm getting an error at this step:
fws = prior(latents[None, None, ...].to(device), attrs[None, ...].to(device), zero_padding)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-29-e9cf7102f562> in <module>
----> 1 fws = prior(latents[None, None, ...].to(device), attrs[None, ...].to(device),zero_padding)

9 frames
/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1192         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194             return forward_call(*input, **kwargs)
   1195         # Do not call functions when jit is used
   1196         full_backward_hooks, non_full_backward_hooks = [], []

/content/diffusion_editor/styleflow/cnf.py in forward(self, x, context, logpx, reverse, inds, integration_times)
     28         else:
     29             for i in inds:
---> 30                 x, logpx = self.chain[i](x, context, logpx, integration_times, reverse)
     31             return x, logpx
     32 

/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1192         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194             return forward_call(*input, **kwargs)
   1195         # Do not call functions when jit is used
   1196         full_backward_hooks, non_full_backward_hooks = [], []

/content/diffusion_editor/styleflow/cnf.py in forward(self, x, context, logpx, integration_times, reverse)
     97             )
     98         else:
---> 99             state_t = odeint(
    100                 self.odefunc,
    101                 states,

/usr/local/lib/python3.8/dist-packages/torchdiffeq/_impl/adjoint.py in odeint_adjoint(func, y0, t, rtol, atol, method, options, event_fn, adjoint_rtol, adjoint_atol, adjoint_method, adjoint_options, adjoint_params)
    196     handle_adjoint_norm_(adjoint_options, shapes, state_norm)
    197 
--> 198     ans = OdeintAdjointMethod.apply(shapes, func, y0, t, rtol, atol, method, options, event_fn, adjoint_rtol, adjoint_atol,
    199                                     adjoint_method, adjoint_options, t.requires_grad, *adjoint_params)
    200 

/usr/local/lib/python3.8/dist-packages/torchdiffeq/_impl/adjoint.py in forward(ctx, shapes, func, y0, t, rtol, atol, method, options, event_fn, adjoint_rtol, adjoint_atol, adjoint_method, adjoint_options, t_requires_grad, *adjoint_params)
     23 
     24         with torch.no_grad():
---> 25             ans = odeint(func, y0, t, rtol=rtol, atol=atol, method=method, options=options, event_fn=event_fn)
     26 
     27             if event_fn is None:

/usr/local/lib/python3.8/dist-packages/torchdiffeq/_impl/odeint.py in odeint(func, y0, t, rtol, atol, method, options, event_fn)
     75 
     76     if event_fn is None:
---> 77         solution = solver.integrate(t)
     78     else:
     79         event_t, solution = solver.integrate_until_event(t[0], event_fn)

/usr/local/lib/python3.8/dist-packages/torchdiffeq/_impl/solvers.py in integrate(self, t)
     26         solution[0] = self.y0
     27         t = t.to(self.dtype)
---> 28         self._before_integrate(t)
     29         for i in range(1, len(t)):
     30             solution[i] = self._advance(t[i])

/usr/local/lib/python3.8/dist-packages/torchdiffeq/_impl/rk_common.py in _before_integrate(self, t)
    161         f0 = self.func(t[0], self.y0)
    162         if self.first_step is None:
--> 163             first_step = _select_initial_step(self.func, t[0], self.y0, self.order - 1, self.rtol, self.atol,
    164                                               self.norm, f0=f0)
    165         else:

/usr/local/lib/python3.8/dist-packages/torchdiffeq/_impl/misc.py in _select_initial_step(func, t0, y0, order, rtol, atol, norm, f0)
     52 
     53     d0 = norm(y0 / scale)
---> 54     d1 = norm(f0 / scale)
     55 
     56     if d0 < 1e-5 or d1 < 1e-5:

RuntimeError: The size of tensor a (542) must match the size of tensor b (570) at non-singleton dimension 0 
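In case it helps localize the mismatch, here is a minimal diagnostic I'd run just before the failing call (a sketch using the variable names from the notebook; the traceback shows the failure in _select_initial_step, where the ODE state and its derivative are divided element-wise):

```python
# Hypothetical shape check (latents, attrs, zero_padding, and device all
# come from the notebook). The adaptive-step solver evaluates
# norm(f0 / scale), so every tensor folded into the ODE state must agree
# in dimension 0; the 542 vs. 570 above suggests zero_padding contributes
# rows the rest of the state doesn't have.
print("latents:      ", latents[None, None, ...].shape)
print("attrs:        ", attrs[None, ...].shape)
print("zero_padding: ", zero_padding.shape)
```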

When I run this step with the zero_padding parameter removed, it doesn't raise any error:
fws = prior(latents[None, None, ...].to(device), attrs[None, ...].to(device))
Similarly, I removed the zero_padding parameter from one other step, and everything runs fine:
rev = prior(fws[0], new_attrs.to(device))[0]

But the final result is not as expected:
[Image: face-editing-issue]

Any idea where I'm going wrong?
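For what it's worth, here's how I would size the padding so its leading dimensions match the latent state; this is only a guess at the shape the CNF expects, not something I verified against styleflow/cnf.py:

```python
import torch

# Hypothetical: one zero "log-density" slot per latent row, so whatever
# the CNF appends to the ODE state agrees with the state in dimension 0.
# The exact shape this repo expects is an assumption on my part.
zero_padding = torch.zeros(latents[None, None, ...].shape[:-1] + (1,),
                           device=device)
```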
