# torch/_dynamo/graph_break_hints.py
# Reconstructed from a compiled-bytecode dump: the constant names and hint
# strings below are recovered verbatim; binding each name to a plain string
# (in the order both appear in the dump) is an assumption about the layout.

USER_ERROR = (
    "Your code may result in an error when running in eager. "
    "Please double check that your code doesn't contain a similar error "
    "when actually running eager/uncompiled. You can do this by removing "
    "the `torch.compile` call, or by using "
    '`torch.compiler.set_stance("force_eager")`.'
)

DYNAMO_BUG = "This is likely to be a Dynamo bug. Please report an issue to PyTorch."

DIFFICULT = (
    "This graph break may be difficult to debug. "
    "Please report an issue to PyTorch for assistance."
)

FUNDAMENTAL = (
    "This graph break is fundamental - it is unlikely that Dynamo will ever "
    "be able to trace through your code. Consider finding a workaround."
)

SUPPORTABLE = (
    "It may be possible to write Dynamo tracing rules for this code. "
    "Please report an issue to PyTorch if you encounter this graph break "
    "often and it is causing performance issues."
)

CAUSED_BY_EARLIER_GRAPH_BREAK = (
    "This graph break may have been caused by an earlier graph break. "
    "Resolving the earlier graph break may resolve this one."
)

INFERENCE_MODE = (
    "Avoid using `tensor.is_inference()` and "
    "`torch.is_inference_mode_enabled()` in your compile code. This is "
    "primarily used in conjunction with `torch.inference_mode`. Consider "
    "using `torch.no_grad` instead because `torch.no_grad` leads to same "
    "improvements as `inference_mode` when `torch.compile` is used."
)

SPARSE_TENSOR = (
    "Sparse tensor operations are not yet fully supported in torch.compile "
    "with fullgraph=True. Consider using fullgraph=False to allow graph "
    "breaks, or move sparse tensor creation outside the compiled region."
)