Bug in creation of CompositeXYProjectors in DefaultDatasetView


Bug in creation of CompositeXYProjectors in DefaultDatasetView

Lee Kamentsky
Hi all, 
I'm looking at the code for net.imglib2.display.CompositeXYProjector and, as I step through it, it's clear that the alpha calculation isn't being handled correctly. Here's the code as it stands now, at roughly line 190:

    for ( int i = 0; i < size; i++ )
    {
        sourceRandomAccess.setPosition( currentPositions[ i ], dimIndex );
        currentConverters[ i ].convert( sourceRandomAccess.get(), bi );
        // accumulate converted result
        final int value = bi.get();
        final int a = ARGBType.alpha( value );
        final int r = ARGBType.red( value );
        final int g = ARGBType.green( value );
        final int b = ARGBType.blue( value );
        aSum += a;
        rSum += r;
        gSum += g;
        bSum += b;
    }
    if ( aSum > 255 )
        aSum = 255;
    if ( rSum > 255 )
        rSum = 255;
    if ( gSum > 255 )
        gSum = 255;
    if ( bSum > 255 )
        bSum = 255;
    targetCursor.get().set( ARGBType.rgba( rSum, gSum, bSum, aSum ) );

I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would appear that the correct solution would be to divide aSum by 3. In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
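As a minimal sketch of the alpha-weighted accumulation I'm describing (the class and method names are illustrative only, not imglib2 API):

```java
// Sketch of the proposed fix: weight each channel's RGB by its alpha,
// divide by the total alpha, and average the alphas over the channels.
public class AlphaWeightedBlend {

    static int blend(int[] argbValues) {
        long aSum = 0, rSum = 0, gSum = 0, bSum = 0;
        for (int v : argbValues) {
            int a = (v >> 24) & 0xff;
            aSum += a;
            rSum += a * ((v >> 16) & 0xff); // scale each color by its alpha
            gSum += a * ((v >> 8) & 0xff);
            bSum += a * (v & 0xff);
        }
        if (aSum == 0) return 0; // all channels fully transparent
        int r = (int) (rSum / aSum); // divide by total alpha at the end
        int g = (int) (gSum / aSum);
        int b = (int) (bSum / aSum);
        int a = (int) (aSum / argbValues.length); // sum of alphas / channel count
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // Three fully opaque channels rendered through red, green and blue LUTs:
        int out = blend(new int[] { 0xffff0000, 0xff00ff00, 0xff0000ff });
        System.out.println(Integer.toHexString(out)); // prints ff555555
    }
}
```

Note that three opaque primaries blend to 0xff555555, each component at exactly 255/3: the 1/3-intensity effect I get into below.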

However, I think the problem is deeper than that. For an RGB ImgPlus, there are three LUTs and each of them has an alpha of 255, but that alpha only applies to one of the colors in the LUT. When you're compositing images and weighting them equally, if two are black and one is white, then the result is 1/3 of the white intensity; if you translate that to red, green and blue images, the resulting intensity will be 1/3 of what's desired. This might sound weird, but the only solution that works out mathematically is for the defaultLUTs in the DefaultDatasetView to use color tables that return values that are 3x those of ColorTables.RED, GREEN and BLUE. Thinking about it, I'm afraid this *is* the correct model: each channel's table really does need to be 3x brighter than a normal color table allows.

It took me quite a bit of back and forth to come up with the above... I hope you all can follow the problem and its counter-intuitive solution, and have the patience to work through it. Dscho, if you made it this far: you're the mathematician, what's your take?

--Lee

_______________________________________________
ImageJ-devel mailing list
[hidden email]
http://imagej.net/mailman/listinfo/imagej-devel

Re: Bug in creation of CompositeXYProjectors in DefaultDatasetView

Aivar Grislis
> I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would appear that the correct solution would be to divide aSum by 3.
Isn't it unusual to define an alpha for each color component? Generally you have a single A associated with a combined RGB. So averaging the three alphas might make sense here, because I think they should all be the same value.
> In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
I think alpha processing is more cumulative, done layer by layer in some defined layer order. For a given pixel, say the current output pixel value is ARGB1 and you are compositing a second image with value ARGB2 on top of it: for the red channel the output color should be ((255 - alpha(ARGB2)) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / 255. The alpha of ARGB1 is not involved.

In other words, if you add a layer that is completely opaque you no longer have to consider any of the colors or alpha values underneath it.
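A sketch of that layer-by-layer rule (the class and method names here are illustrative, not existing imglib2 API):

```java
// Sketch of layer-by-layer compositing: each new layer is blended over
// the running result using only the new layer's alpha.
public class LayerOver {

    /** Blend src over dst; only src's alpha participates. */
    static int over(int dst, int src) {
        int a2 = (src >> 24) & 0xff;
        int r = channel(dst >> 16, src >> 16, a2);
        int g = channel(dst >> 8, src >> 8, a2);
        int b = channel(dst, src, a2);
        return (0xff << 24) | (r << 16) | (g << 8) | b;
    }

    /** ((255 - alpha2) * dst + alpha2 * src) / 255 for one 8-bit component. */
    static int channel(int dst, int src, int a2) {
        dst &= 0xff;
        src &= 0xff;
        return ((255 - a2) * dst + a2 * src) / 255;
    }

    public static void main(String[] args) {
        int base = 0xff804020;
        int opaqueRed = 0xffff0000;
        // A fully opaque layer hides everything underneath it:
        System.out.println(Integer.toHexString(over(base, opaqueRed))); // prints ffff0000
    }
}
```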


I think the bigger issue here is that this code is specifically designed to composite red, green and blue image layers. It's a special case, since for a given pixel the red comes from the red layer, the blue from the blue layer, and the green from the green layer. These layers shouldn't be completely opaque, since then the colors wouldn't combine at all, or completely transparent, since then they wouldn't contribute any color. I don't think transparency is useful here.

It's also possible that a multichannel image with > 3 channels is being displayed with more color channels, namely cyan, magenta, and yellow.  The code here is designed to stop overflow, but I'm not convinced those extended color channels would combine meaningfully.

Aivar

> In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
I think alpha processing is cumulative layer by layer. 

This brings up some interesting questions:

1) If the first, bottom-most layer is transparent, what color should show through?  Black, white?  Or perhaps it's best to ignore this base layer transparency.

2) If you wanted to composite several transparent images, how do you calculate the transparency of the composite?  I'm not sure this is something we need to do.

Aivar


On 7/15/13 10:31 AM, Lee Kamentsky wrote:

Re: Bug in creation of CompositeXYProjectors in DefaultDatasetView

Lee Kamentsky
Thanks for answering Aivar,

Your reply made me take a step back and consider what we're modeling. If you look at my replies below, I think the best solution is a model where the background is white and each successive layer filters out some of that background, like a gel. A layer attenuates the underlying layer by a factor of (1 - alpha/255 * (1 - red/255)), giving no attenuation when red is 255 and an attenuation of alpha/255 when red is zero. We can then use a red converter that returns a value of 255 for the blue and green channels, and the model and math work correctly.

On Mon, Jul 15, 2013 at 1:59 PM, Aivar Grislis <[hidden email]> wrote:
>> I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would appear that the correct solution would be to divide aSum by 3.
> Isn't it unusual to define an alpha for each color component, generally you have a single A associated with a combined RGB?  So averaging the three alphas might make sense here, because I think they should all be the same value.
I think you're right; the model is always that each pixel has a single alpha value that applies to R, G and B. The image I was using was the Clown example image. DefaultDatasetView.initializeView constructs three RealLUTConverters for the projector, one for red, one for green and one for blue, which sends you down this rabbit hole.
>> In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
> I think alpha processing is more cumulative, done layer by layer in some defined layer order.  For a given pixel say the current output pixel value is ARGB1 and you are compositing a second image with value ARGB2 on top of it:  For the red channel the output color should be ((255 - alpha(ARGB2)) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / 255.  The alpha of ARGB1 is not involved.
I think that's a valid interpretation. I've always used (alpha(ARGB1) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / (alpha(ARGB1) + alpha(ARGB2)) because I assumed the alpha indicated the strength of the blending of each source. In any case, the code as it stands doesn't do either of these.

> In other words, if you add a layer that is completely opaque you no longer have to consider any of the colors or alpha values underneath it.

> I think the bigger issue here is this code is specifically designed to composite red, green and blue image layers.  It's a special case since for a given pixel the red comes from the red layer, blue from blue layer, and green from green layer.  These layers shouldn't be completely opaque, since the colors wouldn't combine at all then, or completely transparent, since then they wouldn't contribute any color.  I don't think transparency is useful here.
So this is an argument for blending instead of layering: transparency would be useful if the images were blended and treated as on a par with each other, allowing the user to emphasize one channel or the other.

> It's also possible that a multichannel image with > 3 channels is being displayed with more color channels, namely cyan, magenta, and yellow.  The code here is designed to stop overflow, but I'm not convinced those extended color channels would combine meaningfully.
>
> Aivar

>> In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
> I think alpha processing is cumulative layer by layer.
>
> This brings up some interesting questions:
>
> 1) If the first, bottom-most layer is transparent, what color should show through?  Black, white?  Or perhaps it's best to ignore this base layer transparency.
Maybe the model should be that the background is white and successive layers are like gel filters on top. In that case, you'd have:

    red = (255 - alpha(ARGB2) * (255 - red(ARGB2)) / 255) * red(ARGB1) / 255

And maybe that points to the true solution. For the default, we could change things so that the red channel would have blue = 255 and green = 255, and the first composition would change only the red channel.
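A minimal sketch of this gel-filter model, with illustrative names (not imglib2 API): start from a white background and let each layer attenuate the light passing through it.

```java
// Sketch of subtractive "gel filter" compositing over a white background.
public class GelFilter {

    /** Attenuate one 8-bit channel of `under` by the matching channel of a layer:
     *  factor = 255 - alpha * (255 - layer) / 255, so 255 means pass-through. */
    static int filter(int under, int layer, int alpha) {
        under &= 0xff;
        layer &= 0xff;
        int factor = 255 - alpha * (255 - layer) / 255;
        return under * factor / 255;
    }

    static int composite(int[] argbLayers) {
        int r = 255, g = 255, b = 255; // the white background shows through
        for (int v : argbLayers) {
            int a = (v >> 24) & 0xff;
            r = filter(r, v >> 16, a);
            g = filter(g, v >> 8, a);
            b = filter(b, v, a);
        }
        return (0xff << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // An opaque pure-red gel removes all green and blue but passes red:
        int out = composite(new int[] { 0xffff0000 });
        System.out.println(Integer.toHexString(out)); // prints ffff0000
    }
}
```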

> 2) If you wanted to composite several transparent images, how do you calculate the transparency of the composite?  I'm not sure this is something we need to do.
>
> Aivar



Re: Bug in creation of CompositeXYProjectors in DefaultDatasetView

Aivar Grislis
I think CompositeXYProjector is meant to handle the following cases:

1) Rendering LUT images, a single converter is used.  Grayscale images are included here.

2) Rendering RGB images, three converters are used.  These use red-only, green-only, and blue-only LUTs.

3) I believe it's also intended to work with images with > 3 channels, using C, M, and Y for the excess channels.

The existing code works well for cases 1 & 2.  Case 3 adds the possibility of overflow: if your red converter gives you a value of 255 for the red component, your magenta converter can add another 255.  Currently the code just limits the value to 255 in that case.  Some sort of blending might work better here, but the bigger issue is that RGBCMY is not an additive color system.  If you see a cyan blotch, you don't know if it's in both the G & B channels or just the C channel.
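For comparison, here is a stripped-down model of what the quoted projector loop currently does (sum per component, then clamp); the names are illustrative, not imglib2 API:

```java
// Sketch of the current additive-with-clamp behavior of the projector loop.
public class AdditiveClamp {

    static int composite(int[] argbValues) {
        int aSum = 0, rSum = 0, gSum = 0, bSum = 0;
        for (int v : argbValues) {
            aSum += (v >> 24) & 0xff;
            rSum += (v >> 16) & 0xff;
            gSum += (v >> 8) & 0xff;
            bSum += v & 0xff;
        }
        // clamp each sum to 255 instead of letting it wrap
        aSum = Math.min(aSum, 255);
        rSum = Math.min(rSum, 255);
        gSum = Math.min(gSum, 255);
        bSum = Math.min(bSum, 255);
        return (aSum << 24) | (rSum << 16) | (gSum << 8) | bSum;
    }

    public static void main(String[] args) {
        // A red-LUT channel plus a magenta-LUT channel: red saturates at 255,
        // so the two contributions become indistinguishable after clamping.
        int out = composite(new int[] { 0xffff0000, 0xffff00ff });
        System.out.println(Integer.toHexString(out)); // prints ffff00ff
    }
}
```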

Aivar


On 7/15/13 2:40 PM, Lee Kamentsky wrote:

Re: Bug in creation of CompositeXYProjectors in DefaultDatasetView

Curtis Rueden
Hi all,

> the bigger issue is RGBCMY is not an additive color system.

I believe ImageJ1 treats it as additive. Look at the sample "Organ of Corti" -- the current behavior of ImageJ2 causes that sample to appear the same as it does in IJ1. Before we added the bounds-checking code, it erroneously wrapped pixel values.

As for the alpha stuff, I will try to digest and reply soon, but I am way too tired at this moment. I just wanted to clarify why the code is the way it is. It was intended to be more general than only the cases Aivar mentioned: it provides additive support for *any* per-channel color table you throw at it, the same as ImageJ1's CompositeImages do.

Regards,
Curtis


On Mon, Jul 15, 2013 at 3:46 PM, Aivar Grislis <[hidden email]> wrote:
I think CompositeXYProjector is meant to handle the following cases:

1) Rendering LUT images, a single converter is used.  Grayscale images are included here.

2) Rendering RGB images, three converters are used.  These use red-only, green-only, and blue-only LUTs.

3) I believe it's also intended to work with images with > 3 channels, using C, M, and Y for the excess channels.

The existing code works well for cases 1 & 2.  Case 3 adds the possibility of overflow, if your red converter gives you a value of 255 for the red component but your magenta converter adds another 255.  Currently the code just limits the value to 255 in that case.  Some sort of blending might work better here, but the bigger issue is RGBCMY is not an additive color system.  If you see a cyan blotch you don't know if its in both the G & B channels or just the C channel.

Aivar



On 7/15/13 2:40 PM, Lee Kamentsky wrote:
Thanks for answering Aivar,

I think what your reply did for me is to have me take a step back and consider what we're modeling. If you look at my replies below, I think that the best solution is to use a model where the background is white and each successive layer filters out some of that background, like a gel. A layer attenuates the underlying layer by a fraction of (1 - alpha/255 * (1 - red/255)), resulting in no attenuation for 255 and attenuation of alpha/255 for zero. We can then use a red converter that returns a value of 255 for the blue and green channels and the model and math work correctly.

On Mon, Jul 15, 2013 at 1:59 PM, Aivar Grislis <[hidden email]> wrote:
I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would appear that the correct solution would be to divide aSum by 3.
Isn't it unusual to define an alpha for each color component, generally you have a single A associated with a combined RGB?  So averaging the three alphas might make sense here, because I think they should all be the same value.
I think you're right, the model always is that each pixel has an alpha value that applies to R, G and B. The image I was using was the Clown example image. DefaultDatasetView.initializeView constructs three RealLUTConverters for the projector, one for red, one for green and one for blue which sends you down this rabbit hole.
In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
I think alpha processing is more cumulative, done layer by layer in some defined layer order.  For a given pixel say the current output pixel value is ARGB1 and you are compositing a second image with value ARGB2 on top of it:  For the red channel the output color should be ((255 - alpha(ARGB2)) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / 255.  The alpha of ARGB1 is not involved.
I think that's a valid interpretation. I've always used (alpha(ARGB1) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / (alpha(ARGB1) + alpha(ARGB2)) because I assumed the alpha indicated the
strength of the blending of each source. In any case, the code as it stands doesn't do either of these.

In other words, if you add a layer that is completely opaque you no longer have to consider any of the colors or alpha values underneath it. 

I think the bigger issue here is this code is specifically designed to composite red, green and blue image layers.  It's a special case since for a given pixel the red comes from the red layer, blue from blue layer, and green from green layer.  These layers shouldn't be completely opaque, since the colors wouldn't combine at all then or completely transparent since then they wouldn't contribute any color.  I don't think transparency is useful here.
So this is an argument for blending instead of layering - transparency would be useful if the images were blended and treated as if on a par with each other, allowing the user to emphasize one channel or the other. 

It's also possible that a multichannel image with > 3 channels is being displayed with more color channels, namely cyan, magenta, and yellow.  The code here is designed to stop overflow, but I'm not convinced those extended color channels would combine meaningfully.

Aivar

In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
I think alpha processing is cumulative layer by layer. 

This brings up some interesting questions:

1) If the first, bottom-most layer is transparent, what color should show through?  Black, white?  Or perhaps it's best to ignore this base layer transparency.
Maybe the model should be that the background is white and successive layers are like gel filters on top. In that case, you'd have:
red = (255 - alpha(ARGB2) *(255 - red(ARGB2))/255) * red(ARGB1) 

And maybe that points to what the true solution is. For the default, we could change things so that red channel would have blue = 255 and green = 255 and the first composition would change only the red channel.

2) If you wanted to composite several transparent images, how do you calculate the transparency of the composite?  I'm not sure this is something we need to do.

Aivar


On 7/15/13 10:31 AM, Lee Kamentsky wrote:
It took me quite a bit of back and forth to come up with the above... I hope you all understand what I'm saying and understand the problem and counter-intuitive solution and have the patience to follow it. Dscho, if you made it this far - you're the mathematician, what's your take?

--Lee


_______________________________________________
ImageJ-devel mailing list
[hidden email]
http://imagej.net/mailman/listinfo/imagej-devel



Re: Bug in creation of CompositeXYProjectors in DefaultDatasetView

Aivar Grislis
I believe ImageJ1 treats it [RGBCMY] as additive. Look at the sample "Organ of Corti" -- the current behavior of ImageJ2 causes that sample to appear the same as it does in IJ1. Before we added the bounds-checking code, it erroneously wrapped pixel values.
By not being additive I meant that C is a secondary color composed of the primaries G & B, etc., in the sense of http://en.wikipedia.org/wiki/Additive_color .

Okay, "Organ of Corti" uses RGBK (and K is even worse than my example of C since it has all three RGB components not just G & B) and yet it works as an image.  It's useful because the areas lit up in each channel are fairly distinct.  If these areas overlapped the bounds-checking code would come into play in the overlapping pixels and some highlights would get squashed and some colors distorted (when one component is squashed but not the others).  But even if the code did a better job of combining the colors of overlapping areas you'd still have visual ambiguity in these areas (since eyes can't distinguish C from G + B).  So now I'm thinking the code works well as is.
It was intended to be more general than only the cases Aivar mentioned, and instead provided additive support for *any* color table per channel you throw at it, the same as ImageJ1's CompositeImages do.
Sure, it shouldn't crash and burn if you put Fire on one channel and Ice on another, but that's not visually usable unless the areas lit up in each channel are distinct.  If you have a lot of overlap and you want the colors to add up meaningfully, you're better off sticking with primary additive colors for your channel LUTs.

On 7/15/13 3:53 PM, Curtis Rueden wrote:
Hi all,

> the bigger issue is RGBCMY is not an additive color system.

I believe ImageJ1 treats it as additive. Look at the sample "Organ of Corti" -- the current behavior of ImageJ2 causes that sample to appear the same as it does in IJ1. Before we added the bounds-checking code, it erroneously wrapped pixel values.

As for the alpha stuff, I will try to digest and reply soon but I am way too tired at this moment. I just wanted to clarify why the code is the way it is. It was intended to be more general than only the cases Aivar mentioned, and instead provided additive support for *any* color table per channel you throw at it, the same as ImageJ1's CompositeImages do.

Regards,
Curtis


On Mon, Jul 15, 2013 at 3:46 PM, Aivar Grislis <[hidden email]> wrote:
I think CompositeXYProjector is meant to handle the following cases:

1) Rendering LUT images, a single converter is used.  Grayscale images are included here.

2) Rendering RGB images, three converters are used.  These use red-only, green-only, and blue-only LUTs.

3) I believe it's also intended to work with images with > 3 channels, using C, M, and Y for the excess channels.

The existing code works well for cases 1 & 2.  Case 3 adds the possibility of overflow: if your red converter gives you a value of 255 for the red component, your magenta converter can add another 255.  Currently the code just limits the value to 255 in that case.  Some sort of blending might work better here, but the bigger issue is that RGBCMY is not an additive color system.  If you see a cyan blotch you don't know if it's in both the G & B channels or just the C channel.
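The case 3 overflow can be shown with a couple of made-up channel values; the helper below mirrors the bounds check quoted from CompositeXYProjector but is otherwise a hypothetical sketch:

```java
// Sum two channel contributions and clamp to 8 bits, mirroring the
// bounds-checking code in the projector loop quoted earlier in the thread.
public final class OverflowExample {

    static int clampedSum(int a, int b) {
        int sum = a + b;
        return sum > 255 ? 255 : sum;
    }

    public static void main(String[] args) {
        // A bright red-LUT pixel plus a bright magenta-LUT pixel both carry
        // a red component: 255 + 255 = 510, squashed to 255.
        System.out.println(clampedSum(255, 255)); // 255
        // Non-overlapping or dim contributions never hit the clamp.
        System.out.println(clampedSum(128, 64)); // 192
    }
}
```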

Aivar



On 7/15/13 2:40 PM, Lee Kamentsky wrote:
Thanks for answering Aivar,

I think what your reply did for me is make me take a step back and consider what we're modeling. If you look at my replies below, I think the best solution is to use a model where the background is white and each successive layer filters out some of that background, like a gel. A layer attenuates the underlying layer by a fraction of (1 - alpha/255 * (1 - red/255)), giving no attenuation when the channel value is 255 and attenuation of alpha/255 when it is zero. We can then use a red converter that returns a value of 255 for the blue and green channels, and the model and math work correctly.

On Mon, Jul 15, 2013 at 1:59 PM, Aivar Grislis <[hidden email]> wrote:
I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would appear that the correct solution would be to divide aSum by 3.
Isn't it unusual to define an alpha for each color component?  Generally you have a single A associated with a combined RGB.  So averaging the three alphas might make sense here, because I think they should all be the same value.
I think you're right, the model always is that each pixel has an alpha value that applies to R, G and B. The image I was using was the Clown example image. DefaultDatasetView.initializeView constructs three RealLUTConverters for the projector, one for red, one for green and one for blue which sends you down this rabbit hole.
In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
I think alpha processing is more cumulative, done layer by layer in some defined layer order.  For a given pixel say the current output pixel value is ARGB1 and you are compositing a second image with value ARGB2 on top of it:  For the red channel the output color should be ((255 - alpha(ARGB2)) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / 255.  The alpha of ARGB1 is not involved.
I think that's a valid interpretation. I've always used (alpha(ARGB1) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / (alpha(ARGB1) + alpha(ARGB2)) because I assumed the alpha indicated the strength of the blending of each source. In any case, the code as it stands doesn't do either of these.

In other words, if you add a layer that is completely opaque you no longer have to consider any of the colors or alpha values underneath it. 
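The two readings of alpha in this exchange, the layer-by-layer "over" model and the alpha-weighted average, can be compared on a single channel. This is a hedged sketch with hypothetical names and 0-255 components, not the projector code:

```java
// Two ways to combine one channel of two sources, per the discussion above.
public final class BlendModels {

    // Layer-by-layer ("over") model: the top layer's alpha decides the mix;
    // the bottom layer's own alpha is not involved.
    static int over(int bottom, int top, int topAlpha) {
        return ((255 - topAlpha) * bottom + topAlpha * top) / 255;
    }

    // Alpha-weighted average: each source contributes in proportion to its alpha.
    static int weightedAverage(int v1, int a1, int v2, int a2) {
        if (a1 + a2 == 0) return 0; // both sources fully transparent
        return (a1 * v1 + a2 * v2) / (a1 + a2);
    }

    public static void main(String[] args) {
        // An opaque top layer completely hides the bottom in the "over" model...
        System.out.println(over(200, 50, 255)); // 50
        // ...but only averages with it in the weighted model.
        System.out.println(weightedAverage(200, 255, 50, 255)); // 125
    }
}
```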


Re: Bug in creation of CompositeXYProjectors in DefaultDatasetView

Tobias Pietzsch
Hi all,

My suspicion is that there is no One True Solution™. So from my point of view it would be nice to have a way to support different options.

The recent projectors pull request https://github.com/imagej/imglib/pull/23 by Michael Zinsmaier (KNIME) has potential to provide this extensibility.
Their DimProjector2D, which is a possible replacement for net.imglib2.display.CompositeXYProjector, uses a final Converter< ProjectedDimSampler< A >, B > to convert from a set of A-values in the "composite dimension" to the output B-value. There could be different converters for different alpha-compositing algorithms, and it would be easy for imglib2 users to add new options.

The projectors branch / pull request requires some work to make it a replacement for the current projectors instead of opening up a parallel hierarchy. If someone wants to work on the compositing issues I think that would be a good place to direct efforts to.
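The extensibility Tobias describes amounts to making the compositing algorithm a pluggable strategy applied to the values along the composite dimension. A rough illustration with simplified stand-ins (this is NOT the real imglib2 Converter or ProjectedDimSampler API):

```java
import java.util.function.IntBinaryOperator;

// Simplified model of converter-based compositing: one reduction function is
// applied across a pixel's channel values, so swapping the function swaps the
// compositing algorithm.
public final class CompositeStrategies {

    // Reduce the channel values of one pixel with the chosen accumulator,
    // clamping the result to 8 bits at the end.
    static int composite(int[] channelValues, IntBinaryOperator accumulate) {
        int acc = 0;
        for (int v : channelValues) {
            acc = accumulate.applyAsInt(acc, v);
        }
        return Math.min(255, Math.max(0, acc));
    }

    public static void main(String[] args) {
        int[] channels = { 100, 100, 100 };
        // Additive compositing (the current behavior) overflows and clamps...
        System.out.println(composite(channels, Integer::sum)); // 255
        // ...while a maximum-intensity strategy never can.
        System.out.println(composite(channels, Math::max)); // 100
    }
}
```

Swapping `Integer::sum` for `Math::max` (or an alpha-aware lambda) is the kind of per-algorithm choice a dedicated converter could encapsulate.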

best regards,
Tobias



Re: Bug in creation of CompositeXYProjectors in DefaultDatasetView

Lee Kamentsky
In reply to this post by Aivar Grislis
OK - it's up to you all. If it's doing what you think it should, I'm fine with it.


On Mon, Jul 15, 2013 at 6:28 PM, Aivar Grislis <[hidden email]> wrote:
I believe ImageJ1 treats it [RGBCMY] as additive. Look at the sample "Organ of Corti" -- the current behavior of ImageJ2 causes that sample to appear the same as it does in IJ1. Before we added the bounds-checking code, it erroneously wrapped pixel values.
By not being additive I meant C is a secondary color composed of primaries G & B, etc.  In the sense of http://en.wikipedia.org/wiki/Additive_color .

Okay, "Organ of Corti" uses RGBK (and K is even worse than my example of C since it has all three RGB components not just G & B) and yet it works as an image.  It's useful because the areas lit up in each channel are fairly distinct.  If these areas overlapped the bounds-checking code would come into play in the overlapping pixels and some highlights would get squashed and some colors distorted (when one component is squashed but not the others).  But even if the code did a better job of combining the colors of overlapping areas you'd still have visual ambiguity in these areas (since eyes can't distinguish C from G + B).  So now I'm thinking the code works well as is.
It was intended to be more general than only the cases Aivar mentioned, and instead provided additive support for *any* color table per channel you throw at it, the same as ImageJ1's CompositeImages do.
Sure, it shouldn't crash and burn if you put Fire on one channel and Ice on another but that's not usable visually unless the areas lit up in each channel are distinct.  If you have a lot of overlap and you want the colors to add up meaningfully you're better off sticking with primary additive colors for your channel LUTs.

On 7/15/13 3:53 PM, Curtis Rueden wrote:
Hi all,

> the bigger issue is RGBCMY is not an additive color system.

I believe ImageJ1 treats it as additive. Look at the sample "Organ of Corti" -- the current behavior of ImageJ2 causes that sample to appear the same as it does in IJ1. Before we added the bounds-checking code, it erroneously wrapped pixel values.

As for the alpha stuff, I will try to digest and reply soon but I am way too tired at this moment. I just wanted to clarify why the code is the way it is. It was intended to be more general than only the cases Aivar mentioned, and instead provided additive support for *any* color table per channel you throw at it, the same as ImageJ1's CompositeImages do.

Regards,
Curtis


On Mon, Jul 15, 2013 at 3:46 PM, Aivar Grislis <[hidden email]> wrote:
I think CompositeXYProjector is meant to handle the following cases:

1) Rendering LUT images, a single converter is used.  Grayscale images are included here.

2) Rendering RGB images, three converters are used.  These use red-only, green-only, and blue-only LUTs.

3) I believe it's also intended to work with images with > 3 channels, using C, M, and Y for the excess channels.

The existing code works well for cases 1 & 2.  Case 3 adds the possibility of overflow, if your red converter gives you a value of 255 for the red component but your magenta converter adds another 255.  Currently the code just limits the value to 255 in that case.  Some sort of blending might work better here, but the bigger issue is RGBCMY is not an additive color system.  If you see a cyan blotch you don't know if its in both the G & B channels or just the C channel.

Aivar



On 7/15/13 2:40 PM, Lee Kamentsky wrote:
Thanks for answering Aivar,

I think what your reply did for me is to have me take a step back and consider what we're modeling. If you look at my replies below, I think that the best solution is to use a model where the background is white and each successive layer filters out some of that background, like a gel. A layer attenuates the underlying layer by a fraction of (1 - alpha/255 * (1 - red/255)), resulting in no attenuation for 255 and attenuation of alpha/255 for zero. We can then use a red converter that returns a value of 255 for the blue and green channels and the model and math work correctly.

On Mon, Jul 15, 2013 at 1:59 PM, Aivar Grislis <[hidden email]> wrote:
I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would appear that the correct solution would be to divide aSum by 3.
Isn't it unusual to define an alpha for each color component, generally you have a single A associated with a combined RGB?  So averaging the three alphas might make sense here, because I think they should all be the same value.
I think you're right, the model always is that each pixel has an alpha value that applies to R, G and B. The image I was using was the Clown example image. DefaultDatasetView.initializeView constructs three RealLUTConverters for the projector, one for red, one for green and one for blue which sends you down this rabbit hole.
In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
I think alpha processing is more cumulative, done layer by layer in some defined layer order.  For a given pixel say the current output pixel value is ARGB1 and you are compositing a second image with value ARGB2 on top of it:  For the red channel the output color should be ((255 - alpha(ARGB2)) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / 255.  The alpha of ARGB1 is not involved.
I think that's a valid interpretation. I've always used (alpha(ARGB1) * red(ARGB1) + alpha(ARGB2) * red(ARGB2)) / (alpha(ARGB1) + alpha(ARGB2)) because I assumed the alpha indicated the strength of the blending of each source. In any case, the code as it stands does neither of these.
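For comparison, the two candidate rules can be written out as code (illustrative helpers, not the CompositeXYProjector code; shown for the red channel only):

```java
// Two candidate rules for combining one channel of two ARGB pixels.
public class AlphaRules {
    /** Aivar's layer-by-layer rule: ARGB2 composited over ARGB1; alpha1 is not involved. */
    public static int over(int red1, int red2, int alpha2) {
        return ((255 - alpha2) * red1 + alpha2 * red2) / 255;
    }

    /** Lee's weighted-average rule: each alpha weights its own source. */
    public static int weightedAverage(int red1, int alpha1, int red2, int alpha2) {
        if (alpha1 + alpha2 == 0) return 0; // both fully transparent: nothing contributes
        return (alpha1 * red1 + alpha2 * red2) / (alpha1 + alpha2);
    }

    public static void main(String[] args) {
        // An opaque top layer hides the bottom one under the "over" rule...
        System.out.println(over(200, 50, 255));                 // 50
        // ...but only pulls the result to the average under the weighted rule.
        System.out.println(weightedAverage(200, 255, 50, 255)); // 125
    }
}
```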

In other words, if you add a layer that is completely opaque you no longer have to consider any of the colors or alpha values underneath it. 

I think the bigger issue here is that this code is specifically designed to composite red, green and blue image layers.  It's a special case, since for a given pixel the red comes from the red layer, the blue from the blue layer, and the green from the green layer.  These layers shouldn't be completely opaque, since then the colors wouldn't combine at all, or completely transparent, since then they wouldn't contribute any color.  I don't think transparency is useful here.
So this is an argument for blending instead of layering: transparency would be useful if the images were blended and treated as on a par with each other, allowing the user to emphasize one channel or another.

It's also possible that a multichannel image with > 3 channels is being displayed with more color channels, namely cyan, magenta, and yellow.  The code here is designed to stop overflow, but I'm not convinced those extended color channels would combine meaningfully.

Aivar

In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.
I think alpha processing is cumulative layer by layer. 

This brings up some interesting questions:

1) If the first, bottom-most layer is transparent, what color should show through?  Black, white?  Or perhaps it's best to ignore this base layer transparency.
Maybe the model should be that the background is white and successive layers are like gel filters on top. In that case, you'd have:
red = (255 - alpha(ARGB2) * (255 - red(ARGB2)) / 255) * red(ARGB1) / 255

And maybe that points to what the true solution is. For the default, we could change things so that the red channel's table would have blue = 255 and green = 255, and the first composition would change only the red channel.
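A small sketch of the 1/3-brightness problem and the gel-filter alternative (illustrative names only; it assumes ColorTables.RED maps a sample v to (v, 0, 0), per the discussion above):

```java
// Averaging three single-color LUT outputs drops intensity to a third; treating
// each channel as a gel filter over a white background, with tables like
// (v, 255, 255) for the red channel, preserves full intensity.
public class Channels {
    /** Equal-weight average of the three channel LUT outputs for the red component. */
    public static int averagedRed(int v) {
        int redFromRedLut = v;   // assumed: ColorTables.RED maps v to (v, 0, 0)
        int redFromGreenLut = 0; // GREEN contributes nothing to red
        int redFromBlueLut = 0;  // BLUE contributes nothing to red
        return (redFromRedLut + redFromGreenLut + redFromBlueLut) / 3;
    }

    /** Gel-filter model: white filtered through (v,255,255), (255,v,255), (255,255,v). */
    public static int filteredRed(int v) {
        int red = 255;         // white background
        red = red * v / 255;   // red-channel filter attenuates red to v/255
        red = red * 255 / 255; // green-channel filter leaves red alone
        red = red * 255 / 255; // blue-channel filter leaves red alone
        return red;
    }

    public static void main(String[] args) {
        System.out.println(averagedRed(255)); // 85 -- a third of full intensity
        System.out.println(filteredRed(255)); // 255 -- full intensity preserved
    }
}
```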

2) If you wanted to composite several transparent images, how do you calculate the transparency of the composite?  I'm not sure this is something we need to do.

Aivar


On 7/15/13 10:31 AM, Lee Kamentsky wrote:
Hi all, 
I'm looking at the code for net.imglib2.display.CompositeXYProjector and as I step through it, it's clear that the alpha calculation isn't being handled correctly. Here's the code as it stands now, line 190 roughly:

for ( int i = 0; i < size; i++ )
{
	sourceRandomAccess.setPosition( currentPositions[ i ], dimIndex );
	currentConverters[ i ].convert( sourceRandomAccess.get(), bi );
	// accumulate converted result
	final int value = bi.get();
	final int a = ARGBType.alpha( value );
	final int r = ARGBType.red( value );
	final int g = ARGBType.green( value );
	final int b = ARGBType.blue( value );
	aSum += a;
	rSum += r;
	gSum += g;
	bSum += b;
}
if ( aSum > 255 )
	aSum = 255;
if ( rSum > 255 )
	rSum = 255;
if ( gSum > 255 )
	gSum = 255;
if ( bSum > 255 )
	bSum = 255;
targetCursor.get().set( ARGBType.rgba( rSum, gSum, bSum, aSum ) );

I have an ImgPlus backed by an RGB PlanarImg of UnsignedByteType and ARGBType.alpha(value) is 255 for all of them, so aSum is 765. It would appear that the correct solution would be to divide aSum by 3. In addition, there's no scaling of the individual red, green and blue values by their channel's alpha. If the input were two index-color images, each of which had different alphas, the code should multiply the r, g and b values by the alphas before summing and then divide by the total alpha in the end. The alpha in this case *should* be the sum of alphas divided by the number of channels.

However, I think the problem is deeper than that. For an RGB ImgPlus, there are three LUTs and each of them has an alpha of 255, but that alpha only applies to one of the colors in the LUT. When you're compositing images and weighing them equally, if two are black and one is white, then the result is 1/3 of the white intensity - if you translate that to red, green and blue images, the resulting intensity will be 1/3 of that desired. This might sound weird, but the only solution that works out mathematically is for the defaultLUTs in the DefaultDatasetView to use color tables that return values that are 3x those of ColorTables.RED, GREEN and BLUE. Thinking about it, I'm afraid this *is* the correct model and each channel really is 3x brighter than possible.

It took me quite a bit of back and forth to come up with the above... I hope you all understand what I'm saying and understand the problem and counter-intuitive solution and have the patience to follow it. Dscho, if you made it this far - you're the mathematician, what's your take?

--Lee


_______________________________________________
ImageJ-devel mailing list
[hidden email]
http://imagej.net/mailman/listinfo/imagej-devel












Re: Bug in creation of CompositeXYProjectors in DefaultDatasetView

Stephan Saalfeld
In reply to this post by Aivar Grislis
Hi,

ImageJ1's composition is exclusively additive, and it ignores alpha
values.  I think, in the current discussion, we're mixing additive color
composition with alpha blending.  So let's try to disentangle:

Let's have 4 arbitrary RGB colors c1, c2, c3, c4.  Then, additive
composition means that

c = c1 + c2 + c3 + c4

, that's the ImageJ1 model.

If we add alpha to it, we do not actually mean alpha but a weight
(normalized by 255, so 0 -> 0 and 255 -> 1.0), then

c = a1 * c1 + a2 * c2 + a3 * c3 + a4 * c4

I consider this a useful extension, but it remains unclear what to do
with the resulting alpha value.  My guess is that it does not have much
meaning, so adding it up and cropping, or just setting it to 1.0 (0xff),
are both fine.
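The weighted additive composition above, applied per output channel, might look like this (an illustrative sketch, not the projector code):

```java
// Weighted additive composition: c = a1*c1 + a2*c2 + ..., with each alpha
// normalized by 255 to act as a weight, and the result cropped at 255
// rather than wrapped.
public class AdditiveComposite {
    public static int compose(int[] values, int[] alphas) {
        double sum = 0;
        for (int i = 0; i < values.length; i++) {
            sum += (alphas[i] / 255.0) * values[i]; // weight each contribution
        }
        return Math.min(255, (int) Math.round(sum)); // crop overflow
    }

    public static void main(String[] args) {
        // One full-weight contribution passes through unchanged.
        System.out.println(compose(new int[] {255, 0, 0}, new int[] {255, 255, 255}));   // 255
        // Two full-weight 200s overflow and are cropped to 255.
        System.out.println(compose(new int[] {200, 200, 0}, new int[] {255, 255, 255})); // 255
        // Half weights scale the contributions down.
        System.out.println(compose(new int[] {200, 100, 0}, new int[] {128, 128, 128})); // 151
    }
}
```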

The other operation is alpha blending which is the composition series:

c12 = a2 * c2 + (1 - a2) * a1 * c1,  a12 = a2 + (1 - a2) * a1
c123 = a3 * c3 + (1 - a3) * a12 * c12,  a123 = a3 + (1 - a3) * a12
c1234 = a4 * c4 + (1 - a4) * a123 * c123,  a1234 = a4 + (1 - a4) * a123

Additive composition makes little sense on top of anything but 0, alpha
composition is possible on arbitrary c0.
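The blending series above can be folded layer by layer (values normalized to [0,1]; an illustrative sketch):

```java
// Alpha blending as a composition series, per channel:
// c_out = a2*c2 + (1 - a2)*a1*c1,  a_out = a2 + (1 - a2)*a1.
public class AlphaBlend {
    /** Blend layer (c2, a2) over the accumulated (c1, a1); all values in [0,1]. */
    public static double[] over(double c1, double a1, double c2, double a2) {
        double c = a2 * c2 + (1 - a2) * a1 * c1;
        double a = a2 + (1 - a2) * a1;
        return new double[] { c, a };
    }

    public static void main(String[] args) {
        // Start from an arbitrary opaque base c0 and fold in layers one at a time.
        double[] acc = { 0.8, 1.0 };          // opaque base layer
        acc = over(acc[0], acc[1], 0.2, 0.5); // half-transparent dark layer
        acc = over(acc[0], acc[1], 0.0, 1.0); // opaque black layer wins
        System.out.println(acc[0]);           // 0.0 -- the top opaque layer hides everything
        System.out.println(acc[1]);           // 1.0
    }
}
```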

There are many other forms of composition possible, e.g. filtering
(multiplication), color/hue/intensity mixing, absolute or signed
differences, ...  It therefore seems a bit of an overstatement to call
any one of them THE Composition ;).

CompositeXYProjector implements the non-weighted addition as in ImageJ1,
treating alpha as a fourth channel.


Speaking about projectors:  I would prefer it if we had only one
2D projector.  It is not necessary to choose which dimensions to treat
as x,y, as there is Views.permute(RA[I], int, int).  To project axes
other than 0,1, we could have factory methods like
Projectors.project(RAI, int x, int y) that first permute the axes and
then project 0,1.  That has the benefit that there is no index lookup if
you really do project x,y.

I also think that we no longer need CompositeProjectors.  I have
recently added view.composite.Composite etc. and the method
Views.collapse as a construct to achieve the desired behavior in a more
general way.  The trailing dimension of an RA[I] can be collapsed into
an (n-1)-dimensional RA[I] of Composites.  Composites can then be
converted with Converters (Converters.convert) to get the desired
composition, which may then be x,y-projected.

Of course there are many ways to achieve the same result.  I just like
this one most, as I have the impression that it implements the minimum
required operations in the most generic/extensible fashion.  E.g.,
adding new compositions as discussed above just means writing a new
Converter, with no clutter.  Please don't hesitate to prove me wrong;
e.g., I haven't yet given much thought to potential performance penalties.

I'll try to find time tomorrow to make a few benchmarks.

Best,
Stephan



On Mon, 2013-07-15 at 17:28 -0500, Aivar Grislis wrote:

> > I believe ImageJ1 treats it [RGBCMY] as additive. Look at the sample
> > "Organ of Corti" -- the current behavior of ImageJ2 causes that sample
> > to appear the same as it does in IJ1. Before we added the
> > bounds-checking code, it erroneously wrapped pixel values.
> By not being additive I meant C is a secondary color composed of
> primaries G & B, etc.  In the sense of
> http://en.wikipedia.org/wiki/Additive_color .
>
> Okay, "Organ of Corti" uses RGBK (and K is even worse than my example of
> C since it has all three RGB components not just G & B) and yet it works
> as an image.  It's useful because the areas lit up in each channel are
> fairly distinct.  If these areas overlapped the bounds-checking code
> would come into play in the overlapping pixels and some highlights would
> get squashed and some colors distorted (when one component is squashed
> but not the others).  But even if the code did a better job of combining
> the colors of overlapping areas you'd still have visual ambiguity in
> these areas (since eyes can't distinguish C from G + B).  So now I'm
> thinking the code works well as is.
> > It was intended to be more general than only the cases Aivar
> > mentioned, and instead provided additive support for *any* color table
> > per channel you throw at it, the same as ImageJ1's CompositeImages do.
> Sure, it shouldn't crash and burn if you put Fire on one channel and Ice
> on another but that's not usable visually unless the areas lit up in
> each channel are distinct.  If you have a lot of overlap and you want
> the colors to add up meaningfully you're better off sticking with
> primary additive colors for your channel LUTs.
>
> On 7/15/13 3:53 PM, Curtis Rueden wrote:
> > Hi all,
> >
> > > the bigger issue is RGBCMY is not an additive color system.
> >
> > I believe ImageJ1 treats it as additive. Look at the sample "Organ of
> > Corti" -- the current behavior of ImageJ2 causes that sample to appear
> > the same as it does in IJ1. Before we added the bounds-checking code,
> > it erroneously wrapped pixel values.
> >
> > As for the alpha stuff, I will try to digest and reply soon but I am
> > way too tired at this moment. I just wanted to clarify why the code is
> > the way it is. It was intended to be more general than only the cases
> > Aivar mentioned, and instead provided additive support for *any* color
> > table per channel you throw at it, the same as ImageJ1's
> > CompositeImages do.
> >
> > Regards,
> > Curtis

