Feed aggregator
Image binarization – new R2016a functions

Matlab Image processing blog - 2016, May 16 - 11:20

In my 09-May-2016 post, I described the Image Processing Toolbox functions im2bw and graythresh, which have been in the product for a long time. I also identified a few weaknesses in the functional designs:

  • The function im2bw uses a fixed threshold value (LEVEL) of 0.5 by default. Using graythresh to determine the threshold value automatically would be a more useful behavior most of the time.
  • If you don't need to save the value of LEVEL, then you end up calling the functions in a slightly awkward way, passing the input image to each of the two functions: bw = im2bw(I,graythresh(I))
  • Although Otsu's method really only needs to know the image histogram, you have to pass in the image itself to the graythresh function. This is awkward for some use cases, such as using the collective histogram of multiple images in a dataset to compute a single threshold.
  • Some users wanted to control the number of histogram bins used by graythresh, which does not have that as an option. (I forgot to mention this item in my previous post.)
  • There was no locally adaptive thresholding method in the toolbox.

For all of these reasons, the Image Processing Toolbox development team undertook a redesign of binarization functionality for the R2016a release. The functional designs are different and the capabilities have been extended. We now encourage the use of a new family of functions: imbinarize, otsuthresh, and adaptthresh.

Binarization using an automatically computed threshold value is now simpler. Instead of two function calls, im2bw(I,graythresh(I)), you can do it with one, imbinarize(I).

I = imread('cameraman.tif');
imshow(I)
xlabel('Cameraman image courtesy of MIT')

bw = imbinarize(I);
imshowpair(I,bw,'montage')

In addition to global thresholding, imbinarize can also do locally adaptive thresholding. Here is an example using an image with a mild illumination gradient from top to bottom.

I = imread('rice.png');
bw = imbinarize(I);
imshowpair(I,bw,'montage')
title('Original and global threshold')

You can see that the rice grains at the bottom of the image are imperfectly segmented because they are in a darker portion of the image. Now switch to an adaptive threshold.

bw = imbinarize(I,'adaptive');
imshowpair(I,bw,'montage')
title('Original and adaptive threshold')

Here is a more extreme example of nonuniform illumination.

I = imread('printedtext.png');
imshow(I)
title('Original image')

bw = imbinarize(I);
imshow(bw)
title('Global threshold')

Let's see how using an adaptive threshold can improve the results. Before jumping in, though, notice that the foreground pixels in this image are darker than the background, which is the opposite of the rice grains image above. The adaptive method works better if it knows whether to look for foreground pixels that are brighter or darker than the background. The optional parameter 'ForegroundPolarity' lets us specify that.

bw = imbinarize(I,'adaptive','ForegroundPolarity','dark');
imshow(bw)
title('Adaptive threshold')

The new functions otsuthresh and adaptthresh are for those who want to have more fine-grained control over the algorithms underlying the global and adaptive thresholding behavior of imbinarize. I'll talk about them next time.


Published with MATLAB® R2016a


Categories: Blogs

Actor Tailor Soldier Spy

Casey McKinnon - 2016, May 16 - 10:49




I did a quick shoot with the Headshot Truck last week to refresh my headshots and get some photos of character types. My agent was enthusiastic about getting a powerful shot in a suit for roles like manipulative politician, lawyer, and agent (of the FBI, of real estate, of A.C.R.O.N.Y.M.S., etc.). The second look she wanted was a strong army look, which could also work great for roles like resistance fighter, local militia member, or apocalypse survivor. And, thanks to the efficient photographer in the Headshot Truck, and my own over-preparedness, I was able to sneak in a third look... a somewhat period-appropriate (and somewhat inappropriate) girl-next-door type.

I had a good experience with the Headshot Truck, and I may choose to visit them in the future for another look; perhaps doctor/scientist, nerdy intellectual, or Shakespearean ingenue? We shall see. In the meantime, I'm very pleased with the results and I hope they serve their purpose well.

Categories: Blogs

O’Reilly Hardware Podcast on the risks to the open Web and the future of the Internet of Things

Cory Doctorow - 2016, May 11 - 10:36

I appeared on the O’Reilly Hardware Podcast this week (MP3), talking about the way that DRM has crept into all our smart devices, which compromises privacy, security, and competition.

In this episode of the Hardware podcast, we talk with writer and digital rights activist Cory Doctorow. He’s recently rejoined the Electronic Frontier Foundation to fight a World Wide Web Consortium proposal that would add DRM to the core specification for HTML. When we recorded this episode with Cory, the W3C had just overruled the EFF’s objection. The result, he says, is that “we are locking innovation out of the Web.”

“It is illegal to report security vulnerabilities in a DRM,” Doctorow says. “[DRM] is making it illegal to tell people when the devices they depend upon for their very lives are unsuited for that purpose.”
Get O’Reilly’s weekly hardware newsletter

In our “Tools” segment, Doctorow tells us about tools that can be used for privacy and encryption, including the EFF surveillance self-defense kit, and Wickr, an encrypted messaging service that allows for an expiration date on shared messages and photos. “We need a tool that’s so easy your boss can use it,” he says.

Cory Doctorow on losing the open Web [O’Reilly Hardware Podcast]

Categories: Blogs

Peace in Our Time: how publishers, libraries and writers could work together

Cory Doctorow - 2016, May 9 - 17:33


Publishing is in a weird place: ebook sales are stagnating; the industry has consolidated down to five major publishers; libraries and publishers are at each other’s throats over ebook pricing; major writers’ groups are up in arms over ebook royalties; and, of course, we only have one major book retailer left — what is to be done?


In my new Locus Magazine column, “Peace in Our Time,” I propose a pair of software projects that could bring together writers, publishers, and libraries to increase competition, give publishers the market intelligence they need to sell more books, triple writers’ ebook royalties, and sell more ebooks to libraries, on much fairer terms.

The first project is a free/open version of Overdrive, the software that publishers insist that libraries use for ebook circulation. A free/open version, collectively created and maintained by the library community, would create a source of data that publishers could use to compete with Amazon, their biggest frenemy, while still protecting patron privacy. The publishers’ quid-pro-quo for this data would be an end to the practice of gouging libraries on ebook prices, leaving them with more capital to buy more books.

The second project is a federated ebook store for writers that would allow writers to act as retailers for their publishers, selling their own books and keeping the retailer’s share in addition to their traditional royalty: a move that would triple the writer’s share without costing the publishers a penny. Writer-operated ebook stores, spread all over the Web but searchable from central portals, do not violate the publishers’ agreements with Amazon, but they do create a new sales category: “fair trade ebooks,” whose sale gives the writers you love the money to feed their families and write more books — without costing you anything extra.

Amazon knows, in realtime, how publishers’ books are performing. It knows who is buying them, where they’re buying them, where they’re reading them, what they searched for before buying them, what other books they buy at the same time, what books they buy before and after, whether they read them, how fast they read them, and whether they finish them.

Amazon discloses almost none of this to the publishers, and what information they do disclose to the publishers (the sales data for the publishers’ own books, atomized, without data-mineable associations) they disclose after 30 days, or 90 days, or 180 days. Publishers try to fill in the gaps by buying their own data back from the remaining print booksellers, through subscriptions to point-of-sale databases that have limited relevance to e-book performance.

There is only one database of e-book data that is remotely comparable to the data that Amazon mines to stay ahead of the publishers: e-book circulation data from public libraries. This data is not as deep as Ama­zon’s – thankfully, since it’s creepy and terrible that Amazon knows about your reading habits in all this depth, and it’s right and fitting that libraries have refused to turn on that kind of surveillance for their own e-book circulation.

Peace in Our Time [Cory Doctorow/Locus]

Categories: Blogs

Image binarization – im2bw and graythresh

Matlab Image processing blog - 2016, May 9 - 11:41

As I promised last time, I'm writing a series about functional designs for image binarization in the Image Processing Toolbox. Today I'll start by talking about im2bw and graythresh, two functions that have been in the product for a long time.

The function im2bw appeared in Image Processing Toolbox version 1.0, which shipped in early fall 1993. That was about the time I interviewed for my job at MathWorks. (I was a beta tester of version 1.0.)

Here is the help text from that early function:

%IM2BW Convert image to black and white by thresholding.
%   BW = IM2BW(X,MAP,LEVEL) converts the indexed image X with
%   colormap MAP to a black and white intensity image BW.
%   BW is 0 (black) for all pixels with luminance less
%   than LEVEL and 1 (white) for all other values.
%
%   BW = IM2BW(I,LEVEL) converts the gray level intensity image
%   I to black and white. BW is 0 (black) for all pixels with
%   value less than LEVEL and 1 (white) for all other values.
%
%   BW = IM2BW(R,G,B,LEVEL) converts the RGB image to black
%   and white. BW is 0 (black) for all pixels with luminance
%   less than LEVEL and 1 (white) for all other values.
%
%   See also IND2GRAY, RGB2GRAY.

At that time, the prefix "im" in the function name meant that the function could take more than one image type (indexed, intensity, RGB).

At this point in the early history of MATLAB, the language really only had one type. Everything in MATLAB was a double-precision matrix. This affected the early functional design in two ways. First, the toolbox established [0,1] as the conventional dynamic range for gray-scale images. This choice was influenced by the mathematical orientation of MATLAB as well as the fact that there was no one-byte-per-element data type. The second impact on functional design can be seen in the syntax IM2BW(R,G,B,LEVEL). RGB (or truecolor) images had to be represented with three different matrices, one for each color component. I really don't miss those days!

Here are two examples, an indexed image and a gray-scale image.

[X,map] = imread('trees.tif');
imshow(X,map);
title('Original indexed image')

bw = im2bw(X,map,0.5);
imshow(bw)
title('Output of im2bw')

I = imread('cameraman.tif');
imshow(I)
title('Original gray-scale image')
xlabel('Cameraman image courtesy of MIT')

bw = im2bw(I,0.5);
imshow(bw)
title('Output of im2bw')

It turns out that im2bw had other syntaxes that did not appear in the documentation. Specifically, the LEVEL argument could be omitted. Here is the relevant code fragment:

if isempty(level), % Get level from user
    level = 0.5;   % Use default for now
end

Experienced software developers will be amused by the code comment above, "Use default for now". This indicates that the developer intended to go back and do something else here before shipping but never did. Anyway, you can see that a LEVEL of 0.5 is used if you don't specify it yourself.

MATLAB 5 and Image Processing Toolbox version 2.0 shipped in early 1998. These were very big releases for both products. MATLAB 5 featured multidimensional arrays, cell arrays, structs, and many other features. MATLAB 5 also had something else that was big for image processing: numeric arrays that weren't double precision. At the time, you could make uint8, int8, uint16, int16, uint32, int32, and single arrays. However, there was almost no function or operator support for these arrays. The capability was so limited that we didn't even mention it in the MATLAB 5 documentation.

Image Processing Toolbox 2.0 provided support for (and documented) uint8 arrays. The other types went undocumented and largely unsupported in both MATLAB and the toolbox for a while longer.

Multidimensional array and uint8 support affected almost every function in the toolbox, so version 2.0 was a complex release, especially with respect to compatibility. We wanted to be able to handle uint8 and multidimensional arrays smoothly, to the degree possible, with existing user code.

One of the design questions that arose during this transition concerned the LEVEL argument for im2bw. Should the interpretation of LEVEL be different, depending on the data type of the input image? To increase the chance that existing user code would work as expected without change, even if the image data type changed from double to uint8, we adopted the convention that LEVEL would continue to be specified in the range [0,1], independent of the input image data type. That is, a LEVEL of 0.5 has the same visual effect for a double input image as it does for a uint8 input image.
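As a quick illustration of that convention (a sketch, not the exact toolbox implementation), a normalized LEVEL of 0.5 on a uint8 image corresponds to comparing pixel values against 0.5 * 255:

```matlab
% Sketch: the normalized LEVEL convention on a uint8 image.
I = imread('cameraman.tif');   % uint8, values in 0..255
bw1 = im2bw(I, 0.5);           % normalized threshold in [0,1]
bw2 = I > 0.5 * 255;           % same comparison scaled to the uint8 range
% bw1 and bw2 should match, so LEVEL = 0.5 has the same visual effect
% here as it does for a double image with values in [0,1].
```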

Now, image processing as a discipline is infamous for its "magic numbers," such as threshold values like LEVEL, that need to be tweaked for every data set. Sometime around 1999 or 2000, we reviewed the literature about algorithms to compute thresholds automatically. There were only a handful that seemed to work reasonably well for a broad class of images, and one in particular seemed to be both popular and computationally efficient: N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, 1979, pp. 62-66. This is the one we chose to implement for the toolbox. It is the algorithm under the hood of the function graythresh, which was introduced in version 3.0 of the toolbox in 2001.
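To give a flavor of Otsu's method, here is an illustrative sketch (not the toolbox's implementation): the algorithm examines every candidate threshold in a 256-bin histogram and picks the one that maximizes the between-class variance of the two resulting pixel classes.

```matlab
% Illustrative sketch of Otsu's method on a 256-bin histogram.
counts = imhist(I);                     % histogram of a uint8 image
p      = counts / sum(counts);          % bin probabilities
omega  = cumsum(p);                     % class-0 probability at each cut
mu     = cumsum(p .* (1:256)');         % cumulative class mean (unnormalized)
muT    = mu(end);                       % global mean
% Between-class variance for every candidate threshold:
sigmaB2 = (muT*omega - mu).^2 ./ (omega .* (1 - omega));
[~, idx] = max(sigmaB2);                % bin giving maximal class separation
level = (idx - 1) / 255;                % normalized, like graythresh output
```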

The function graythresh was designed to work well with the function im2bw. It takes a gray-scale image and returns the same normalized LEVEL value that im2bw uses. For example:

level = graythresh(I)

level =

    0.3451

bw = im2bw(I,level);
imshow(bw)
title('Level computed by graythresh')

Aside from multilevel thresholding introduced in R2012b, this has been the state of image binarization in the Image Processing Toolbox for about the last 15 years.

There are a few weaknesses in this set of functional designs, though, and these weaknesses eventually led the development team to consider an overhaul.

  • Most people felt that the value returned by graythresh would have been a better default LEVEL than 0.5.
  • If you don't need to save the value of LEVEL, then you end up calling the functions in a slightly awkward way, passing the input image to each of the two functions: bw = im2bw(I,graythresh(I))
  • Although Otsu's method really only needs to know the image histogram, you have to pass in the image itself to the graythresh function. This is awkward for some use cases, such as using the collective histogram of multiple images in a dataset to compute a single threshold.
  • There was no locally adaptive thresholding method in the toolbox.

Next time I plan to discuss the new image binarization functional designs in R2016a.

Also, thanks very much to ez, PierreC, Matt, and Mark for their comments on the previous post.



Categories: Blogs

The open web’s guardians are acting like it’s already dead

Cory Doctorow - 2016, May 3 - 11:02

The World Wide Web Consortium — an influential standards body devoted to the open web — used to make standards that would let anyone make a browser that could view the whole Web; now they’re making standards that let the giant browser companies and giant entertainment companies decide which browsers will and won’t work on the Web of the future.

When you ask them why they’re doing this, they say that the companies are going to build technology that locks out new entrants no matter what they do, and by capitulating to them, at least there’s a chance of softening the control the giants will inevitably get.

In my latest Guardian column, Why the future of web browsers belongs to the biggest tech firms, I explain how the W3C’s decision to let giant corporations lock up the Web betrays a belief that the open Web is already dead, and that all that’s left to argue about are the terms our new overlords will present to us.

Today is the International Day Against DRM. EME, the W3C project that hands control over the Web to giant corporations, uses DRM to assert this control.

We will get the open Web we deserve. If you and I and everyone we know stand up to the bullies who want to use entertainment technology to seize control over the future, we can win.

Otherwise, we’ll be Huxleyed into the full Orwell.

Make it easy for today’s crop of web giants to sue any new entrants into oblivion and you can be pretty certain there won’t be any new entrants.

It marks a turning point in the history of those companies. Where once web giants were incubators for the next generation of entrepreneurs who struck out and started competitors that eclipsed their former employers, now those employees are setting the stage for a future where they can stay where they are, or slide sideways to another giant. Forget overturning the current order, though. Maybe they, too, think the web is cooked.

In case there was any doubt of where the W3C stood on whether the future web needed protection from the giants of today, that doubt was dispelled last month. Working with the Electronic Frontier Foundation, I proposed that the W3C adapt its existing policies – which prohibit members from using their patents to block new web companies – to cover EME, a move that was supported by many W3C members.

Rather than adopt this proposal or a version of it, last month, the W3C executive threw it out, giving the EME group a green light to go forward with no safeguards whatsoever.

Why the future of web browsers belongs to the biggest tech firms [The Guardian]

Categories: Blogs

Image binarization – new functional designs

Matlab Image processing blog - 2016, April 28 - 04:00

With the very first version of the Image Processing Toolbox, released more than 22 years ago, you could convert a gray-scale image to binary using the function im2bw.

I = imread('rice.png');
imshow(I)
title('Original gray-scale image')

bw = im2bw(I);
imshow(bw)
title('Binary image')

You can think of this as the most fundamental form of image segmentation: separating pixels into two categories (foreground and background).

Aside from the introduction of graythresh in 2001, this area of the Image Processing Toolbox has stayed quietly unchanged. Now, suddenly, the latest release (R2016a) has introduced an overhaul of binarization. Take a look at the release notes:

imbinarize, otsuthresh, and adaptthresh: Threshold images using global and locally adaptive thresholds

The toolbox includes a new function, imbinarize, that converts grayscale images to binary images using a global threshold or a locally adaptive threshold. The toolbox also includes two new functions, otsuthresh and adaptthresh, that provide a way to determine the threshold needed to convert a grayscale image into a binary image.

What's up with this? Why were new functions needed?

I want to take advantage of this functionality update to dive into the details of image binarization in a short series of posts. Here's what I have in mind:

  1. The state of image binarization in the Image Processing Toolbox prior to R2016a. How did it work? What user pains motivated the redesign?
  2. How binarization works in R2016a
  3. Otsu's method for computing a global threshold
  4. Bradley's method for computing an adaptive threshold

Mostly, I haven't written this yet. If there is something particular you'd like to know, tell me in the comments, and I'll try to work it in.
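As a teaser for that series, here is a minimal sketch of the two-step pattern the new functions enable: compute a threshold (here a per-pixel map) separately, then apply it. The sensitivity value 0.4 is illustrative, not a recommendation.

```matlab
% Sketch: split threshold computation from binarization.
I = imread('printedtext.png');
T = adaptthresh(I, 0.4, 'ForegroundPolarity', 'dark');  % per-pixel threshold map
bw = imbinarize(I, T);        % apply the locally computed thresholds
imshowpair(I, bw, 'montage')
```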



Categories: Blogs

Problem complexity

Matlab Image processing blog - 2016, April 26 - 11:43

Twice in the last month, I have read comments about certain problems being intrinsically hard to solve.

In a fascinating book I just started to read called Algorithms to Live By, authors Brian Christian and Tom Griffiths say:

Life is full of problems that are, quite simply, hard. And the mistakes made by people often say more about the intrinsic difficulties of the problem than about the fallibility of human brains. (p. 5)

And on the blog Walking Randomly, Mike Croucher observed:

I see [a particular] thought pattern in a lot of areas. The pattern looks like this:

It’s hard to do ‘thing’. Most smart people do ‘thing’ with ‘foo’ and, since ‘thing’ is hard, many people have experienced problems with ‘foo’. Hence, people bash ‘foo’ a lot. ‘foo’ sucks!

We run into the question of problem complexity a lot in MATLAB design. For example, many people get confused trying to write a recursive function. Is that because the language design is flawed in some way? I tend to think the root cause can instead be found in the inherent conceptual complexity associated with recursion.

That said, we always have to guard against complacency. Any time we see our users have difficulty completing their tasks, we look for ways to improve our product design.


CLim, caxis, imshow, and imagesc

Matlab Image processing blog - 2016, April 25 - 09:12

In response to "MATLAB image display - autoscaling values with imshow," MATLAB Answerer Extraordinaire ImageAnalyst posted this comment:

A discussion of the relationship and interplay of caxis(), CLim, and the values you can pass in inside the brackets to imshow() or imagesc() might be useful. For example, let's say your values range from 200 to 35,000, and you want all values less than 1000 to be blue and all values more than 29000 to be red. And you want a colorbar with 16 steps - 16 discrete colors. How would one go about that, and using which functions?

Good question! Let's have a go at it, starting with CLim.

CLim is a property of an Axes object.

To investigate CLim, start with imagesc, some elevation data, and a color bar.

load mt_monadnock.mat
imagesc(Zc)
axis image
colorbar

The Axes object controls many aspects of the plot, including the axes rulers, the ticks, the tick labels, the grid lines, and much more. The function gca ("get current axes") returns the Axes object.

ax = gca

ax =

  Axes with properties:

             XLim: [0.5000 3.0425e+03]
             YLim: [0.5000 3.0425e+03]
           XScale: 'linear'
           YScale: 'linear'
    GridLineStyle: '-'
         Position: [0.1168 0.1100 0.6965 0.8150]
            Units: 'normalized'

  Use GET to show all properties

The Axes object has a large number of properties, so by default MATLAB shows you just the most commonly used ones. If you run this code interactively, you would see a clickable "Show all properties" link.

One of those properties is CLim ("color limits"), which you can access directly this way:

ax.CLim

ans =

  141.5285  963.1366

If you look closely at the color bar in the image plot above, you can see the correspondence between it and the CLim values. ax.CLim(1) is the bottom value on the color bar, and ax.CLim(2) is the top value.

Where did those values come from, though?

They were automatically computed from the range of the data being plotted.

min(Zc(:))

ans =

  141.5285

max(Zc(:))

ans =

  963.1366

You can set the CLim yourself, though, and that changes the way the color is scaled from the data values. Let's set the color limits to expand the visible details of the lower elevations.

ax.CLim = [140 400];

Or maybe you want to examine the upper elevations.

ax.CLim = [600 970];

ImageAnalyst mentioned the function caxis. That's just a convenient way to set the color limits. It's one step shorter than getting the Axes using gca and then setting its CLim property.

caxis([400 600])

You can also use caxis to quickly get back to automatic computation of color limits.

caxis('auto')

Then ImageAnalyst asked about the [low high] syntax for imagesc and imshow. This is just another convenience for setting the color limits.

imagesc(Zc,[400 600])
axis image
colorbar
ax = gca;
ax.CLim

ans =

   400   600

The final part of ImageAnalyst's comment concerned the number of colors. What if you only want 16 colors? Well, all of the MATLAB colormap functions take an optional input argument specifying the number of colors to use. So just call the colormap function that you want to use and pass it the desired number of colors.

caxis('auto')
colormap(parula(16))
title('Full range, 16 colors')
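To tie everything back to ImageAnalyst's original scenario, you can combine a 16-color colormap with explicit color limits. The sketch below is untested against his actual data and assumes a hypothetical matrix Z with values ranging from 200 to 35,000; jet is used here because it runs from blue at the bottom to red at the top.

```matlab
% Hypothetical matrix Z with values in [200, 35000].
imagesc(Z)
axis image
% 16 discrete colors; with jet, the lowest bin is blue and the highest is red.
colormap(jet(16))
% Values <= 1000 saturate to the first color (blue);
% values >= 29000 saturate to the last color (red).
caxis([1000 29000])
colorbar
```

With CLim pinned to [1000 29000], everything below 1000 and everything above 29000 falls into the end bins of the 16-color map, which is exactly the behavior the comment asked about.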




JPEG2000 and specifying a target compression ratio

Matlab Image processing blog - 2016, April 14 - 11:14

I was looking at some documentation yesterday and saw something that I had forgotten. When you write a JPEG2000 image file using imwrite, you can specify a desired compression ratio.

OK, that sounds fun. Let's try it with the peppers image.

rgb = imread('peppers.png');
imshow(rgb)
title('Original image')

Shall we start with a modest compression ratio of 10?

imwrite(rgb,'peppers_10.j2k','CompressionRatio',10);
imshow('peppers_10.j2k')
title('Target compression ratio: 10')

That looks the same.

Hold on, though. Let's make sure we agree on what compression ratio means. And while we're at it, let's check the output file to verify the actual ratio.

The in-memory storage format for this image is 3 color values for each pixel, and each color value is stored using 1 byte. So the total number of in-memory bytes to represent the image is the number of rows times the number of columns times 3.

size(rgb)

ans =

   384   512     3

num_mem_bytes = prod(ans)

num_mem_bytes =

      589824

Now let's figure out the size of the JPEG2000 file we just created.

s = dir('peppers_10.j2k');
num_file_bytes = s.bytes

num_file_bytes =

       58384

The compression ratio is the ratio of those two numbers.

r = num_mem_bytes / num_file_bytes

r =

   10.1025

That's pretty close to the specified target.

Let's compress the image by a factor of 20.

imwrite(rgb,'peppers_20.j2k','CompressionRatio',20)
imshow('peppers_20.j2k')
title('Target compression ratio: 20')

I'm still seeing very little difference. Let's dial it all the way up to 100.

imwrite(rgb,'peppers_100.j2k','CompressionRatio',100)
imshow('peppers_100.j2k')
title('Target compression ratio: 100')

Now that's getting pretty bad. Let's zoom into a region and compare more closely.

subplot(1,2,1)
imshow(rgb)
xlim([310 440])
ylim([30 130])
title('Original')
subplot(1,2,2)
imshow('peppers_100.j2k')
xlim([310 440])
ylim([30 130])
title('Target compression ratio: 100')

How extreme can we get? Let's try 1000.

imwrite(rgb,'peppers_1000.j2k','CompressionRatio',1000)
clf
imshow('peppers_1000.j2k')
title('Target compression ratio: 1000')

That's pretty ugly. But how big is the file? Did we get close to the target?

s = dir('peppers_1000.j2k');
s.bytes

ans =

   545

num_mem_bytes / s.bytes

ans =

   1.0822e+03

That last image is stored using just 545 bytes! That's a compression ratio of about 1082.
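The whole experiment above can be condensed into a short loop that writes the file for each target ratio and reports the achieved ratio. This is just a sketch of the steps already shown, using the same filenames:

```matlab
% Write peppers.png as JPEG2000 at several target compression ratios
% and compare the achieved ratio against each target.
rgb = imread('peppers.png');
num_mem_bytes = numel(rgb);   % 3 color values per pixel, 1 byte each

for target = [10 20 100 1000]
    filename = sprintf('peppers_%d.j2k', target);
    imwrite(rgb, filename, 'CompressionRatio', target);
    s = dir(filename);
    fprintf('target: %4d   achieved: %6.1f\n', target, num_mem_bytes/s.bytes);
end
```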

I know JPEG2000 is used in some medical imaging systems, and I have also heard that the Library of Congress uses it.

Do you use JPEG2000? If so, please leave a comment. I'd be interested to hear about your application.



Intersecting curves that don’t intersect

Matlab Image processing blog - 2016, April 12 - 14:36

A post in MATLAB Answers earlier this year reminded me that working on a discrete grid can really mess up apparently obvious notions about geometry.

User Hg offered an image containing two intersecting curves. The problem is that the intersecting curves didn't intersect!

Here's the image:

The red curve and the blue curve, which obviously cross each other, do not have any pixels in common. Hg's question: "What can I do to estimate the intersection point?"

Before I get into answering the original question, let me take a brief side trip to see if I can reconstruct the original input matrix (or at least something close). The image posted on MATLAB Answers had the pixels obviously blown up, with different colors applied to distinguish between the curves. Can I get back to just a matrix with 0s, 1s, and 2s corresponding to pixels belonging to the background and the two curves?

It helps that the image is a PNG file, which is losslessly compressed. I took a close look at the pixels using imtool, and I was able to determine that pixels belonging to one curve were colored using [237 28 36], and pixels belonging to the second curve were colored using [0 162 232]. Also, each pixel seemed to be magnified by about a factor of 16. Let's run with those numbers.

url = 'http://www.mathworks.com/matlabcentral/answers/uploaded_files/44249/intersect2.png';
rgb = imread(url);
red = rgb(:,:,1);
green = rgb(:,:,2);
blue = rgb(:,:,3);
curve1 = (red == 237) & (green == 28) & (blue == 36);
curve2 = (red == 0) & (green == 162) & (blue == 232);
L = zeros(size(curve1));
L(curve1) = 1;
L(curve2) = 2;
imshow(L,[],'InitialMagnification','fit')

That looks good. Let's try shrinking it down by a factor of 16.

L16 = imresize(L,1/16,'nearest');
imshow(L16,[],'InitialMagnification','fit');

Well, that's not perfect. But it's close enough to work with for answering Hg's question.

Image Analyst offered a potential solution:

There is no one pixel where the overlap occurs. If you'll accept any of those 4 pixel locations as an overlap, then perhaps if you dilated, ANDed, then called bwulterode. Something like (untested)

intImage = imdilate(bw1, true(3)) & imdilate(bw2, true(3));
intPoints = bwulterode(intImage);

Image Analyst's idea is to use morphological dilation to thicken each curve individually, then use a logical AND operator (&) to determine the set of pixels that belong to both thickened curves, and then compute the ultimate erosion using bwulterode to shrink the intersection area down. Here it is in several steps.

curve1 = (L16 == 1);
curve2 = (L16 == 2);
curve1_thickened = imdilate(curve1,ones(3,3));
imshow(curve1_thickened,'InitialMagnification','fit')
title('Curve 1 (thickened)')
curve2_thickened = imdilate(curve2,ones(3,3));
imshow(curve2_thickened,'InitialMagnification','fit')
title('Curve 2 (thickened)')
curve_intersection = curve1_thickened & curve2_thickened;
imshow(curve_intersection,'InitialMagnification','fit')
title('Intersection of thickened curves')
ultimate_erosion = bwulterode(curve_intersection);
imshow(ultimate_erosion,'InitialMagnification','fit')
title('Ultimate erosion of intersection')

Let's overlay the output of the ultimate erosion on top of the original image.

imshow(imfuse(L16,ultimate_erosion,'falsecolor'),'InitialMagnification','fit')

You can see that the ultimate erosion is not a single point. An alternative is to compute the centroids of the intersection "blobs."

s = regionprops(ultimate_erosion,'Centroid')

s =

    Centroid: [9.5000 6.5000]

There's just one intersection blob, and its centroid is at (9.5,6.5). Let's plot that.

imshow(L16,[],'InitialMagnification','fit')
hold on
plot(s.Centroid(1),s.Centroid(2),'go','MarkerSize',15,...
    'MarkerFaceColor','g')
hold off

How would you approach this problem? Leave your ideas and thoughts in the comments.
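Here's one alternative idea, offered only as an untested sketch: since each of these curves is roughly single-valued along the column direction, you could treat them as functions y = f(x), take the mean row position of each curve in every column, and look for the column where the difference of the two profiles changes sign. Linear interpolation at that sign change gives a sub-pixel intersection estimate. The code assumes the label matrix L16 built above and that both curves have pixels in the columns near the crossing.

```matlab
% Untested sketch: sub-pixel intersection by treating each curve as y = f(x).
[r1, c1] = find(L16 == 1);
[r2, c2] = find(L16 == 2);
y1 = accumarray(c1, r1, [], @mean);          % mean row of curve 1 per column
y2 = accumarray(c2, r2, [], @mean);          % mean row of curve 2 per column
n = min(numel(y1), numel(y2));
d = y1(1:n) - y2(1:n);                       % vertical gap between the curves
k = find(d(1:end-1) .* d(2:end) <= 0, 1);    % first sign change => crossing
x_cross = k + d(k) / (d(k) - d(k+1));        % linear interpolation in x
y_cross = interp1([k k+1], y1([k k+1]), x_cross);
```

This sidesteps the morphology entirely, at the cost of assuming each curve crosses every column near the intersection exactly once.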



Paperback Book Tour April 19-30th!

Flog - 2016, April 11 - 14:06

Just a heads up I’ll be on paperback book tour starting next week! A lot of Midwest action, and there’s a bonus chapter in this version, so lots of goodies!

I’m only doing one or two conventions this year, so if you wanna come meet me this is the best chance! Details below:
TUESDAY, APRIL 19, 2016 – LOS ANGELES
Live Talks Los Angeles at the Bootleg Theater | 8:00pm
2220 Beverly Boulevard
Los Angeles, CA 90057
Details: http://smarturl.it/FDayLiveTalks

WEDNESDAY, APRIL 20, 2016 – PORTLAND, OR
Powell’s Books at Cedar Hills Crossing | 6:00pm
3415 SW Cedar Hills Boulevard
Beaverton, OR 97005
Details: http://smarturl.it/FDayPowells
THURSDAY APRIL 21, 2016 – DENVER
Tattered Cover Bookstore | 7:00pm
2526 East Colfax Avenue
Denver, CO 80206
Details: http://smarturl.it/FDayTatteredCover

SATURDAY APRIL 23, 2016 – CHICAGO
Anderson’s Bookshop | 2:00pm
123 W Jefferson Avenue
Naperville, IL 60540
Details: http://smarturl.it/FDayAndersons

SUNDAY APRIL 24, 2016 – CHICAGO
Ebenezer Lutheran Church (hosted by Women & Children First) | 4:00pm
1650 West Foster Avenue
Chicago, IL 60640
Details: http://smarturl.it/FDayWomen

MONDAY APRIL 25, 2016 – MILWAUKEE, WI
Boswell Book Company | 7:00pm
2559 North Downer Avenue
Milwaukee, WI 53211
Details: http://smarturl.it/FDayBoswell

TUESDAY APRIL 26, 2016 – CINCINNATI, OH
Joseph-Beth Booksellers | 7:00pm
2692 Madison Road
Cincinnati, OH 45208
Details: http://smarturl.it/FDayJosephBeth

WEDNESDAY APRIL 27, 2016 – CARRBORO, NC
Flyleaf Books at Cat’s Cradle | 6:00pm
300 East Main Street
Carrboro, NC 27510
Details: http://smarturl.it/FDayFlyleaf

FRIDAY APRIL 29, 2016 – ALBUQUERQUE, NM
University of New Mexico—Woodward Hall (hosted by Bookworks) | 6:00pm
Yale Boulevard NE
Albuquerque, NM 87106
Details: http://smarturl.it/FDayBkwrks


It's never too late to play an 8-year-old girl...

Casey McKinnon - 2016, April 1 - 16:36

Last week I had the opportunity to perform at The Autry in The Baby Snooks Show, a comedic radio play from the 1940s that originally starred Fanny Brice. My character, Henrietta, was the privileged little arch nemesis to Baby Snooks and I had a thrilling time portraying her to a packed house in the 206-seat Wells Fargo Theater.

The event was organized by SAG-AFTRA and featured two other golden era radio plays: Gunsmoke and an episode of Five Minute Mysteries. The script for every radio play was unchanged from its original air date and included commercials and live sound effects. Our episode of The Baby Snooks Show was titled "The Ugly Duckling" and originally aired on October 24, 1947. It featured some hilarious Jell-O ads that became a running joke weaving throughout the story, and the play concluded with a delightful jingle for Log Cabin maple syrup performed by our quartet of male singers.

As for the role of Henrietta? I was born to play it. I was raised on reruns, and I've spent years mimicking classic children's voices, so it's about time I get to use those voices! I really should be doing more voiceover roles, so this has reminded me that I need to record a voice reel. I do have a voiceover role coming out that I can't wait to share with you, especially if you're a video game fan! Details soon.

I had a great time working with this amazing cast under the direction of Lee Purcell, who you may know from her extensive list of acting credits. I'm very grateful to have been a part of this production, and I hope I get to work with these talented actors again soon.

My favorite scene partner had to be Mariel Sheets, who played Sally in The Peanuts Movie last year. Mariel is the first kid I've had the opportunity to work with professionally, and she made every rehearsal a joy. Her energy reminded me what it's like to be an actual kid again, and her intelligence reminded me never to take children for granted!

Special thanks to Devon, Paul, and Rudy for making the schlep to Griffith Park, and thanks to photographer Michael C. Sheets for these great shots:


The cast of The Baby Snooks Show

From left: Shawn Ryan, Mandy Schneider, Lee Purcell, Sherry Weston, Paul Whetstone, Mariel Sheets, Casey McKinnon, Andrew James Scott, Dan Velasquez, and Jason Mack Watkins












Director Lee Purcell


Live SFX created by Bob Telford and Amalisha HuEck


Ambitious Tech: Why I Joined Connected Lab

Will Pate - 2016, March 29 - 04:01

My Robot, My Friend

The first sentence I learned to read was, “Bad command or file name.”

There was a moment just before I learned how to read when my motivation was very high. That was the day a computer showed up in our house. When I turned it on, it showed a DOS prompt. There were no instructions. I would have to figure out how to speak to the computer in the language it wanted.

A few years later, on Christmas Eve, an Omnibot 2000 came to our door. In a sad voice, he told my little brother and me that he had no friends and no toys. We were overcome with a mixture of sympathy and excitement. We enthusiastically invited him in to be our friend and play with our toys.

Those childhood moments showed me the spectrum of technology: on one end it could be obtuse and confusing. On the other end, it could make you not just feel positive emotions, but bring out the best in you. That’s why I’ve always wanted technology to be more connected, more human.

The Age of Ambitious Tech

When I worked with the World Bank, I saw how a lack of ambition about technology could hold back even some of the smartest people in the world. Countless hours of top subject matter experts' time were wasted dealing with legacy enterprise systems like Lotus Notes.

At Xtreme Labs, I saw how ambition about technology could change industries. We built some of the most successful mobile apps ever created. Apps that sit on your home screen. Apps used by billions of people.

When I worked in Silicon Valley, I saw how ambitious they were compared to Canadians. A decade later, Toronto is becoming a city for ambitious tech companies. Whether it’s an established tech company like Shopify or an emerging one like Connected Lab, firms from our city can have a huge global impact.

Let’s Get Connected

As technology has made the world more complex, I have made a career of breaking it down and helping people understand it. Clearly, I belong at an ambitious tech company.

That’s why I’m so excited to join Connected Lab. Colleagues I worked with at Xtreme Labs started the firm. We are already helping some of the world’s most ambitious companies develop truly great connected software.

My title is Head of Strategy. My job is to help organizations understand how to succeed in this new era. I hope I can do my small part to help make the world a better place by helping ambitious companies understand why and how to create experiences that are more connected, more human.

We are growing fast. If you’d like to work on ambitious technology, drop me a line. I’d love to hear from you!


Pausing MATLAB (R2016a)

Matlab Image processing blog - 2016, March 23 - 04:00

MATLAB R2016a shipped earlier this month. It has a new feature that is a personal favorite: the Pause button.

Have you ever sat watching MATLAB busily running your code, thinking that it was taking too long? And did you think to yourself, “Do I interrupt it or not? What if it’s really almost finished? And why is it taking so long?”

Well, now you can just hit the Pause button. MATLAB will stop and put you in the debugger, where you can see what’s going on, and then you can continue execution if you like.

Hop on over to Stuart's blog for a video explanation.


30% off O’Reilly’s Open Source Convention in Austin, May 16-19

Cory Doctorow - 2016, March 23 - 01:08

O’Reilly’s venerable, essential OSCON is in Austin, Texas this year, meaning that you’ll get to combine brain-thumpingly good talks and workshops of free/open source tools and techniques with some of the world’s best BBQ, millions of bats, my favorite toy store anywhere, and one of the best indie bookstores you could hope to visit.


I’m delivering one of this year’s keynotes, and I’ll certainly be doing all of the above!

The O’Reilly folks are offering 30% off your OSCON badge when you book with the offer code BOING30.


See you there!

OSCON is where all of the pieces come together: developers, innovators, businesspeople, and investors. In the early days, this trailblazing O’Reilly event was focused on changing mainstream business thinking and practices; today OSCON is about real-world practices and how to successfully implement open source in your workflow or projects. While the open source community has always been viewed as building the future—that future is here, and it’s everywhere you look. Since 1999, OSCON has been the best place on the planet to experience the open source ecosystem. At OSCON, you’ll find everything open source: languages, communities, best practices, products and services. Rather than focus on a single language or aspect, such as cloud computing, OSCON allows you to learn about and practice the entire range of open source technologies.

In keeping with its O’Reilly heritage, OSCON is a unique gathering where participants find inspiration, confront new challenges, share their expertise, renew bonds to community, make significant connections, and find ways to give back to the open source movement. The event has also become one of the most important venues to announce groundbreaking open source projects and products.

O’Reilly OSCON, May 16 – 19, 2016 in Austin, TX [O’Reilly Media]

(Image: OSCON 2015 Portland, O’Reilly Conferences)


MATLAB image display – autoscaling values with imshow

Matlab Image processing blog - 2016, March 21 - 11:54

Last week I talked about displaying gray-scale and binary images. In that post, I showed how to control the grayscale range. For example, the call imshow(I,[0.4 0.6]) displays the matrix I as a gray-scale image so that the value 0.4 gets displayed as black, and the value 0.6 gets displayed as white.

Brett, a MathWorks application engineer and frequent File Exchange contributor, correctly pointed out that I neglected to discuss a common and useful syntax: imshow(I,[]). This syntax automatically determines the grayscale range of the display based on the minimum and maximum values of I. It is equivalent to imshow(I,[min(I(:)) max(I(:))]).

Here's an example. I have a matrix of terrain elevation values, in meters, near Mt. Monadnock in New Hampshire, USA. The peak of Mt. Monadnock is about 960 meters.

load mt_monadnock

The matrix is stored in the MAT-file using the variable name Zc.

whos Zc
  Name         Size                Bytes  Class     Attributes
  Zc        3042x3042           37015056  single

min(Zc(:))
ans = 141.5285

max(Zc(:))
ans = 963.1366

To display these elevation values as a gray-scale image with autoscaling, just call imshow(Zc,[]):

imshow(Zc,[])
Warning: Image is too big to fit on screen; displaying at 33%
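As a cross-check, you can produce the same autoscaled display by rescaling explicitly with mat2gray, which with no limits maps min(Zc(:)) to 0 and max(Zc(:)) to 1 (a quick sketch, assuming Zc is still in the workspace):

```matlab
% Equivalent to imshow(Zc,[]): rescale to [0,1] first, then display.
Zs = mat2gray(Zc);   % (Zc - min(Zc(:))) / (max(Zc(:)) - min(Zc(:)))
imshow(Zs)           % default display range for floating-point images is [0 1]
```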

And, while we're here, I'll point out that you can install any colormap you like once a gray-scale image is displayed. Want some color?

colormap(parula)



Published with MATLAB® R2016a


Categories: Blogs

Screw optimism, we need hope instead

Cory Doctorow - 2016, March 16 - 10:26


I wrote an essay called “Fuck Optimism” for a print project from F-Secure, about how we’ll make the Internet a 21st century electronic nervous system that serves humanity and stop it from being a tool to oppress, surveil and displace humans.


In honor of Digital Freedom Month, F-Secure and Little Atoms have republished it online.

Say that I believed that the Internet – presently treated by regulators as the world’s best video-on-demand service, or the world’s most perfect pornography distribution service, or the world’s finest jihadi recruiting tool – would be turned into the world’s greatest surveillance device.

WHAT WOULD I DO?

I would work to take back the Internet. To make crypto usable and robust. To spread free (as in “speech”, if not as in “beer”) and open software. To hold regulators to account on the matter of network neutrality, and to build alternative networks less susceptible to rent-seeking by venal cultists of the religion of fiscal responsibility over human decency.

In short, I would do every single thing I would do if I was *optimistic* about the Internet.

FUCK OPTIMISM. I WANT *HOPE*.

Hope is why you tread water if your ship sinks in the open sea: Not because you have any real chance of being picked up, but because everyone who was picked up kicked until the rescue came.

Kicking is a necessary (but insufficient) precondition for survival. There’s a special kind of hope: the desperate hope we have for people who are depending upon us. If your ship sinks in open water and your child can’t kick for herself, you’ll wrap her arms around your neck and kick twice as hard for both of you.

To quote the eminent sage and Saturday morning cartoon superhero The Tick: “Don’t destroy the Earth! That’s where I keep all my stuff!”

Cory Doctorow’s manifesto for hope
[Little Atoms]

Categories: Blogs

MATLAB image display – grayscale and binary images

Matlab Image processing blog - 2016, March 14 - 06:51

In my previous posts (February 9, February 22, and February 29), I discussed the truecolor and indexed image display models in MATLAB, as well as the direct and scaled variations of indexed display. The Image Processing Toolbox has conventions for two additional image display models: grayscale and binary. These conventions are used by the MATLAB image display function imshow, which originated in the Image Processing Toolbox.

Grayscale image display

If you pass a single argument to imshow, it interprets the input as a grayscale image. Here's an illustration using a simple sinusoid:

theta = linspace(0, 2*pi, 256);
I = repmat((-cos(2*theta) + 1)/2, [256 1]);
im = imshow(I);

As far as the MATLAB Graphics system is concerned, this is a scaled indexed image being displayed in a figure with a grayscale colormap installed. Here are the key properties that have been set to control the image display:

ax = gca;
fig = gcf;
im.CDataMapping
ans = scaled

ax.CLim
ans = 0 1

map = fig.Colormap;
map(1:5,:)
ans =
         0         0         0
    0.0039    0.0039    0.0039
    0.0078    0.0078    0.0078
    0.0118    0.0118    0.0118
    0.0157    0.0157    0.0157
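For comparison, here is a rough sketch of how you could reproduce this display model yourself with core graphics calls (illustrative only; imshow also takes care of axes visibility, aspect ratio, and other details):

```matlab
% Hand-rolled approximation of imshow's grayscale display model.
figure
im = image(I, 'CDataMapping', 'scaled');  % scaled indexed display
ax = gca;
ax.CLim = [0 1];        % 0 maps to the first colormap row, 1 to the last
colormap(gray(256))     % 256-entry grayscale colormap
axis image off          % square pixels, no axes ruler
```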

The function imshow handles all these details for you.

Controlling the grayscale display range

Using imshow, you can override the conventional display range and specify your own black and white values. You do this by providing a second input argument, a two-element vector containing the black and white values. In the call to imshow below, 0.4 (and any lower value) gets displayed as black. The value 0.6 (and any higher value) gets displayed as white.

imshow(I, [0.4 0.6])

Binary image display

The other Image Processing Toolbox image display model is the binary image. If you supply a single input argument that is logical, then imshow (as well as many other toolbox functions) interprets that input as a binary image.

bw = imread('text.png');
islogical(bw)
ans = 1

imshow(bw)
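Any logical array displays this way. For instance, thresholding with a relational operator yields a logical matrix that imshow treats as binary (a quick sketch, reusing the sinusoid image I from above):

```matlab
% A relational operator produces a logical array, which imshow
% displays as a binary image (false -> black, true -> white).
bw2 = I > 0.5;    % threshold the sinusoid image from earlier
islogical(bw2)    % returns logical 1 (true)
imshow(bw2)
```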



Published with MATLAB® R2016a


Categories: Blogs

The “Encryption” Debate

Steve Gibson - 2016, March 12 - 19:49

“Encryption” is quoted in the title of this essay because encryption is NOT what any of this is actually about. The debate is not about encryption, it’s about access. It should be called “The Device Access Debate” and encryption should have never been brought into it.

Modern smartphones have batteries, screens, memory, radios, encryption, and a bunch of other stuff. Collectively, they all make the phone go, they are all good, and we want as much of each of them as the device’s manufacturer can squeeze in. We do not want smaller batteries, lower resolution screens, less memory, less capable radios, or weaker encryption. And it is entirely proper for Apple to boast about the battery life, screen resolution, memory, radio, and encryption strength of their products. The FBI is entirely correct when it says that Apple is actively marketing the newly increased encryption strength of their latest phones and operating systems. That’s as it should be, in the same way that Apple is marketing their devices’ battery life and screen resolution. Those are all features of modern smartphones, and Apple knows what their users want. We all want those things.

The fourth amendment to the US Constitution states: The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

The 4th amendment is about managing access. It does not provide that under no circumstances, ever, can duly authorized law enforcement officials enter someone’s home. It provides a managed and monitored mechanism―a compromise―between the privacy rights of the individual and the needs of the citizenry who surround that person. And it is under this 4th amendment that US citizens have enjoyed the balanced guarantee that their home is their castle and that only a lawfully issued search warrant, meeting the test of probable cause, would allow law enforcement authorities a legal right to enter.

Mapping the 4th amendment onto encrypted devices:
Without weakening their devices’ encryption, Apple could arrange to be able to respond to court orders in the United States. If Apple wished to be able to respond to lawful search warrants to unlock their devices, they could embed a single, randomly derived, high-entropy (256-bit) unique per-device key in the hardware secure enclave of every device. This key would not be derived from any formula or algorithm, so there would be no master secret that might somehow escape or become known to a malicious agency. It would be truly random and far too lengthy for any possible brute force guessing attack to be feasible. Upon embedding each individual random per-device key, Apple would securely store a copy of that key in their own master key vault. In this way, without sacrificing anyone’s security, only Apple, on a device by device basis, could unlock any one of their own devices.

This might appear to place an undue burden upon Apple. But this, too, seems balanced. Apple is, as the FBI correctly observed, obtaining great marketing value from the strength of their security technology. It is understandable that Apple would rather not be able to respond to court orders to unlock their devices. But this attitude is not in keeping with constitutional precedent.

Users of Apple’s products would know that our devices sport the latest and greatest strongest encryption, making them utterly impervious to any attacker, hacker, border official, local or foreign government. And that as with the interiors of our homes, only in accordance with due legal process, and Apple’s continuing assistance, could our device be unlocked in compliance with a search warrant. And if, at any time in the future, Apple decided this was the wrong decision, they could destroy their vault of per-device unlocking keys and we would be no worse off than we are today.

Although the perfect math of encryption does provide for absolute privacy, we all know that privacy can be horribly abused and used against the greater public welfare. The founders of the United States, whom many revere, understood this well. Apple should too.

People who intend to comment: Please recognize that I understand there are many additional subtleties, such as handling the demands of foreign authorities. It is probably the case that the applicable laws of each country should be honored. This essay intended only to clarify the confusion between encryption and access, and the scheme I have proposed is sufficiently flexible to accommodate any specific access policy Apple might choose and/or change as needed.

 

Follow-up, 20 hours later:
I wrote this post to separate the issue of encryption strength from access policy. Much ink and angst has been expended over phrases involving “backdoors” and “weakened encryption.” All such concerns are red herrings because unbreakable encryption simply gives Apple unbreakable access control. Apple could design a completely secure facility to manage unlocking individual devices. Whether Apple should do so is deservedly one of the most critical questions of our time, and is worthy of truly engaging debate. If we decide that we want to leave things as they are, that’s fine. But we should not conflate whatever policy Apple implements with their users’ security. We can have both.


Categories: Blogs