What is the nature of your input file right now? Is it “an image,” as in RGB+A? Or are the various rendering work-products stored distinctly (as they might be, say, in an OpenEXR file)?
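One quick way to find out is to look at the channel list stored in the file itself. Here is a minimal sketch, assuming the Python `OpenEXR` binding is installed and using a hypothetical file name; the pass names shown in the comments are only illustrations of what a multi-layer render might contain, not anything your renderer is guaranteed to write.

```python
# A minimal sketch: list the channels stored in an EXR file.
# Assumes the Python "OpenEXR" binding (pip install OpenEXR);
# "render.exr" is a hypothetical file name.
import OpenEXR

exr = OpenEXR.InputFile("render.exr")
header = exr.header()

# A multi-layer EXR keeps each render pass as its own named channel,
# e.g. "ball.diffuse.R", "ball.specular.R", "keyLight.shadow.Y", and so on.
for name, channel in header["channels"].items():
    print(name, channel)

# If all you see is plain R, G, B, A, the passes have already been
# flattened into one image and the per-pass information is gone.
```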
If everything has been reduced to “an image,” then there’s really not much (that I know of) that you can do: vital information is irretrievably gone. Everything has been mashed together into pixels, and you no longer have any way to know why those pixel values are there.
What you need to have, in your original source-data going into your pipeline, is that “everything is separate.” For instance, you might have color, specular, alpha, and normal channels just for “the ball.” Then, separately(!), channels just for the density of shadows that are being cast by that ball. (Perhaps a separate channel for each light.)
A rendering sequence has a lot of separate channels of data flowing through it that must have been produced separately, and that have remained separate throughout. They are distinguished, not by their (RGBA) characteristics, but rather by their purpose. “This is shadow #X of object #Y,” and so on. The very last steps are “to combine them.”
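To make that “combine them at the very end” idea concrete, here is a small NumPy sketch. The pass names, shapes, and the simple shadow-multiply plus “over” composite are all assumptions for illustration, not the exact math your compositing package performs.

```python
# A minimal NumPy sketch of combining separately-rendered passes at the end.
# All pass names and values are hypothetical.
import numpy as np

H, W = 4, 4
ball_color  = np.ones((H, W, 3), dtype=np.float32)        # the ball's beauty pass
ball_alpha  = np.ones((H, W, 1), dtype=np.float32)        # the ball's coverage
key_shadow  = np.full((H, W, 1), 0.3, dtype=np.float32)   # shadow density cast by light #1
fill_shadow = np.full((H, W, 1), 0.1, dtype=np.float32)   # shadow density cast by light #2
background  = np.full((H, W, 3), 0.5, dtype=np.float32)

# Darken the background by each light's shadow density, then composite
# the ball over it. Because every pass is still separate, any one of them
# can be adjusted independently right up to this final step.
shadowed_bg = background * (1.0 - key_shadow) * (1.0 - fill_shadow)
final = ball_color * ball_alpha + shadowed_bg * (1.0 - ball_alpha)
print(final.shape)
```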
If your original render files included that information with sufficient separation of data, then you are fine. If not, you will have to re-render.