Compiling ZoneMinder with libjpeg-turbo and JPEG_LIB_VERSION error

If you’ve ever tried to build ZoneMinder from source and been frustrated by the following compile error, then I hope this helps you.

[ 30%] Building CXX object src/CMakeFiles/zm.dir/zm_image.cpp.o
 /u1/src/ZoneMinder/src/zm_image.cpp: In member function ‘bool Image::ReadRaw(const char*)’:
 /u1/src/ZoneMinder/src/zm_image.cpp:616:27: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
 /u1/src/ZoneMinder/src/zm_image.cpp: In member function ‘bool Image::ReadJpeg(const char*, unsigned int, unsigned int)’:
 /u1/src/ZoneMinder/src/zm_image.cpp:664:5: error: ‘JPEG_LIB_VERSION’ was not declared in this scope
 /u1/src/ZoneMinder/src/zm_image.cpp: In member function ‘bool Image::WriteJpeg(const char*, int, timeval) const’:
 /u1/src/ZoneMinder/src/zm_image.cpp:825:5: error: ‘JPEG_LIB_VERSION’ was not declared in this scope
 /u1/src/ZoneMinder/src/zm_image.cpp: In member function ‘bool Image::DecodeJpeg(const JOCTET*, int, unsigned int, unsigned int)’:
 /u1/src/ZoneMinder/src/zm_image.cpp:956:5: error: ‘JPEG_LIB_VERSION’ was not declared in this scope
 /u1/src/ZoneMinder/src/zm_image.cpp: In member function ‘bool Image::EncodeJpeg(JOCTET*, int*, int) const’:
 /u1/src/ZoneMinder/src/zm_image.cpp:1090:5: error: ‘JPEG_LIB_VERSION’ was not declared in this scope
 make[2]: *** [src/CMakeFiles/zm.dir/zm_image.cpp.o] Error 1
 make[1]: *** [src/CMakeFiles/zm.dir/all] Error 2
 make: *** [all] Error 2

libjpeg-turbo's include/jconfig.h (from your installed libjpeg-turbo dev package) has:

#define JPEG_LIB_VERSION 62

And, to avoid including this more than once, the wrapper libjpeg-turbo/include/jpeglib.h has:

#ifndef JCONFIG_INCLUDED /* in case jinclude.h already did */
#include "jconfig.h" /* widely used configuration options */
#endif

So far so good… The problem comes when ZoneMinder compiles zm_image.cpp.

The include chain is:

zm_image.cpp
 -> zm_image.h
   -> zm_jpeg.h
     -> jinclude.h
        -> jconfig.h   (from libjpeg-turbo)
      -> jpeglib.h     (from libjpeg-turbo)

Now, this *should* all work properly! I suspect it has something to do with CMake and header ordering: by the time the compiler gets back to jinclude.h, the JPEG_LIB_VERSION define has been lost.

My fix was to take the version from my libjpeg-turbo include/jconfig.h and simply add it to ZoneMinder's jinclude.h:

#include "jconfig.h" /* auto configuration options */
#define JCONFIG_INCLUDED /* so that jpeglib.h doesn't do it again */

#ifndef JPEG_LIB_VERSION
#define JPEG_LIB_VERSION 62
#endif

Add the three lines of the #ifndef block after the jconfig.h include. Once done, the compile should complete properly.

Hope this helps others get past this hurdle.


Amazon Prime isn’t all that fast

I’ve decided to build myself a new computer. I haven’t done this in about 10 years, so there was a fair amount of reading and research to understand all the latest components, processors, chipsets, and compatible RAM.

Because this is likely going to have to last me a while, I wanted to go big. Very big. More on that in the next article as I intend on documenting the build.

I also wanted to buy all of the components from Amazon due to Amazon’s excellent customer service should anything go wrong.

On Friday 5th September 2014, I started ordering. During checkout, Amazon offered me a free trial of Amazon Prime.
Excellent, I'll get everything faster. Well, that's the theory anyway.

It is now Wednesday 10th September (evening) and only 4 of the 6 items have been delivered. Everything was in stock when I placed the order. Why aren't they all here yet?

The CPU (a Devil's Canyon i7-4790K) was ordered from Ebuyer because Amazon had no stock and was quoting 2-4 weeks, so I sucked up Ebuyer's £8 delivery charge to Northern Ireland. It arrived the next day. Well done, Ebuyer.

Amazon Prime, however, I don’t think will be having the trial converted to a purchase.

domainadmin.com phishing spam email that isn’t.

I registered a new domain name this evening and very quickly received what looked very much like a phishing email from domainadmin.com.

The email itself:

> From: PGregg [mailto:xxx@example.com]

> Sent: 24 June 2014 23:26

> To: xxx@example.com

> Subject: VERIFICATION REQUIRED – Please verify your domain name(s) as soon as possible


Greetings,

Please read this important e-mail carefully.

Recently you registered, transferred or modified the contact information for one or more of your domain name(s). As of January 1, 2014, ICANN requires all accredited registrars to verify your new contact information.

You can read about ICANN’s new policy at: http://www.icann.org/en/resources/registrars/raa/approved-with-specs-27jun13-en.htm#whois-accuracy.

newdomainname.com

In order to ensure your domain name remain active, you must now click the following link and follow the instructions provided:

http://approve.domainadmin.com/registrant/?verification_id=1234567&key=abcdefg&rid=12345

Failure to follow the above link and complete this process will eventually lead to the suspension of your domain name(s).

If you have any questions, please contact us.

Sincerely,

PGregg

It turns out this is actually legitimate. Posting this in case others wonder the same and Google happens to direct them to this page.


Menshn stats and where they came from.

You may have noticed, if you have been following my twitter feed, that I have been posting some Menshn statistics recently. You may also be wondering how I came by these numbers.


Someone sent me a message on twitter pointing me to the URL menshn.com/data/chat.php (which shall remain unclickable for reasons that will become apparent). This page dumps the last 20-30k "menshns" in a semi-structured HTML format; in total (at the time of writing) it dumps 31MB of data. So you can see why I'm not making it a link: I have no desire to overload their systems.

Looking at "View source" on the menshn.com homepage, it seems they use this endpoint to back the automatically updating feed on their homepage.

If you watch the traffic generated by your browser, you can see it making a request every 4 seconds for https://menshn.com/data/chat.php?roomid=*&lastid=73405

So now we know where my source got the link from: it seems that if you don't supply any arguments, it just dumps everything it has. With such a dataset, we can compute some metrics.

First up, I parsed all the data out to produce a simple ID,Room,Name,Message text file, just to prove to myself that I had understood the data set and was parsing it correctly.

Next, I built metric gathering into the parser: count the unique users, the number of posts/menshns, the number of rooms/topics, and so on.

From this I have the top-line information:

Number of active users: 218
Number of active rooms: 224
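The counting itself is simple once the parse is right. A minimal sketch of the idea in Python (my actual parser was a separate script; the sample lines below are made up for illustration):

```python
from collections import Counter

# A few made-up lines in the ID,Room,Name,Message format produced by the parser.
parsed_lines = [
    "73401,//ukpolitics,janemcqueen,Hello",
    "73402,//ukpolitics,BlackAdder,Hi there",
    "73403,//religion,janemcqueen,Another post",
    "73404,//olympics2012,Chriss,Go team",
]

users = Counter()
rooms = Counter()
for line in parsed_lines:
    # Split on the first three commas only, so commas in messages survive.
    _id, room, name, _message = line.split(",", 3)
    users[name] += 1
    rooms[room] += 1

print("Number of active users:", len(users))
print("Number of active rooms:", len(rooms))
print("Most prolific user:", users.most_common(1)[0])
```

Run over the full 31MB dump, the same counters produce the "Top 20" lists below.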

Breaking this down further to “Top 20” lists, I get:

20 Most prolific users:
 5752 janemcqueen
 3240 CosensV
 2019 Chriss
 2011 BlackAdder
 1569 PoliticsBlogorguk
 1520 Xlibris
 1106 DavidX
 783 JOSHBHJ
 782 Louise
 717 EdenFisher
 704 JayMcNeil
 666 Grist
 588 TinderWall
 401 RV
 384 Bozier
 373 jeanprytyskacz
 348 MikeARPowell
 285 Silaz
 251 Rabbs
 239 Europe

And

20 Busiest rooms:
 6361 //ukpolitics
 3216 //gaymarriage
 1252 //religion
 1014 //assangecase
 877 //olympics2012
 717 //judaism
 673 //uselection
 663 //atheism
 642 //mormonism
 585 //davidcameron
 527 //civilliberty
 479 //reshuffle
 474 //mittromney
 415 //corbyelectio
 394 //capitalism
 315 //twitter
 295 //falklands
 224 //louisemensch
 208 //philosophy
 204 //catholicism

Growth metrics are easily obtained by performing the same test at different times; in my case, the two snapshots were 3.5 days apart, leading to the conclusion I posted on twitter.


If you really want to see all the menshns, rather than overloading the menshn server, you can obtain my parsed analysis of the dump at http://pgregg.com/test/menshn/menshnchat.txt

I'd welcome comments on this. For the record: none of this information was obtained via a "hack", and no illegal acts were committed in gathering it.


Menshn: Another password design flaw

OK, so I forgot my password on Menshn, again, and went to reset it. Normal email-address-plus-token thing, except I noticed another problem.

Menshn emails you a link in the form:

pwreset.php?e=email@address.com&c=8chartoken

At least they are not emailing plain-text passwords again. But I noticed that the token link can be used multiple times and does not expire.

Requesting a new token invalidates earlier tokens; however, the most recent pwreset token stays valid indefinitely.

Ooops. Bad Menshn, bad. Back to the naughty corner for you.

At the very least, clear the stored token once the user has used it (and ensure you don't accept blank tokens).
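What single-use, expiring tokens look like is easy to sketch (illustrative Python, not Menshn's actual code; the in-memory dict stands in for whatever table they use):

```python
import secrets
import time

RESET_TOKENS = {}      # email -> (token, expiry); a database table in practice
TOKEN_LIFETIME = 3600  # accept tokens for one hour only

def issue_token(email):
    """Create a fresh token, invalidating any earlier one for this address."""
    token = secrets.token_urlsafe(16)
    RESET_TOKENS[email] = (token, time.time() + TOKEN_LIFETIME)
    return token

def redeem_token(email, token):
    """Accept a token at most once, only before it expires, never if blank."""
    entry = RESET_TOKENS.get(email)
    if entry is None or not token:
        return False
    stored, expiry = entry
    if token != stored or time.time() > expiry:
        return False
    del RESET_TOKENS[email]  # single use: clear it the moment it is redeemed
    return True
```

Issuing a new token replaces the old one, a blank token is always rejected, and a redeemed token cannot be replayed; three checks Menshn's pwreset.php apparently skips.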

Menshn DNS is a (technical thingy).

So Menshn changed their DNS and stopped their site working for a number of users.

Users pointed it out and Menshn did what Menshn does and blamed everyone else but themselves. I call it the Apple Defence. Or #You’reHoldingItWrong.

What Louise probably doesn't know is that whoever is advising her* plainly doesn't know the first, or last, thing about DNS.

*assuming she has an advisor, perhaps Bozier, as no geek worth his (or her) salt will ever say “technical thingy”.

No Louise, DNS migration does not take 24 hours. It is not the fault of the other ISPs. It is your own fault.

Now Louise and Bozier have both blocked me on twitter, but I'm a magnanimous chap. In the words of Sid from Ice Age, "I'm too lazy to hold a grudge", so I'll tell them how to fix it next time.

DNS records have a little number attached to them called a TTL, or Time To Live. Normally the domain TTL is 86400 seconds or, as you've found, 24 hours. This number is entirely within your control: it is the number *you* give to other ISPs when they ask for your zone information. So when their systems receive that data, they can rightly assume it is good for the next 24 hours.

So, when you are planning a domain/DNS change, what do you do? You lower the TTL to an acceptable outage window, e.g. 60 seconds, on your original DNS zone servers. Crucially, you need to do this at least 24 hours in advance of the change, so that the existing longer-TTL records already cached out there have time to expire.

Then, when you switch DNS servers or server IPs, your maximum outage window is the new, lower TTL.
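In zone-file terms, the sequence looks something like this (illustrative records using documentation IP ranges, not Menshn's actual zone):

```
; At least 24 hours BEFORE the migration: drop the TTL on the old records
www.menshn.com.     60    IN  A  192.0.2.1      ; old server, TTL now 60s

; At the switch: change the record; caches go stale within 60 seconds
www.menshn.com.     60    IN  A  198.51.100.7   ; new server

; Once everything is stable, raise the TTL back up
www.menshn.com.  86400    IN  A  198.51.100.7
```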

Welcome to the Internet. It’s a technical thingy.

What 16TB raw space looks like at home

I’ve been looking for some home backup solutions over the past couple of months. This has led me down both the do-it-yourself route and buying a ready-made solution.

One of my requirements was that the solution be more than just storage; otherwise I would have bought a straight NAS box from the likes of QNAP, Netgear or, if feeling rich, Drobo. Most of these dedicated NAS boxes can be "rooted" to allow ssh access, but their CPUs are generally underpowered for general-purpose use.

Other requirements were a reasonably small form factor and room for at least 4 SATA hard drives, preferably hot-swappable. Hardware RAID was not a requirement because I intended to use a Linux distribution with mdadm software RAID.

In the end, I built two boxes.
The first, a home build, was based on the CFI A7879 chassis with a Gigabyte GA-D525TUD dual-core Atom Mini-ITX board.

GA-D525TUD

The second was an off-the-shelf HP ProLiant MicroServer which, to be brutally honest, I chose because HP were offering a £100 cashback deal on it. That made the server much cheaper than you could possibly build yourself from components.

HP_Microserver
I added 4GB of RAM to each box (a total of 5GB in the HP box, because it comes with 1GB). The CFI boot drive is an 8GB (30MB/sec) CompactFlash card mounted as an IDE drive. The HP boot drive is a 16GB SanDisk Cruzer USB stick.

Finally, I added 4 x 2TB Samsung F4EG HD204UI drives to each box.

The CFI box runs its 8TB in RAID5, providing 5.4TB usable; the HP runs its 8TB in RAID6, providing 3.6TB usable space.
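Those usable figures are just the RAID arithmetic plus the terabyte-versus-tebibyte difference. A quick sanity check (assuming the marketing 2TB = 2x10^12 bytes per drive; the last sliver down to the reported 5.4 will be filesystem overhead):

```python
TB = 10**12   # drive makers' "terabyte"
TiB = 2**40   # tebibyte, what the OS actually reports

drives = 4
drive_bytes = 2 * TB

raid5 = (drives - 1) * drive_bytes  # RAID5 loses one drive to parity
raid6 = (drives - 2) * drive_bytes  # RAID6 loses two drives to parity

print(f"RAID5 usable: {raid5 / TiB:.1f} TiB")
print(f"RAID6 usable: {raid6 / TiB:.1f} TiB")
```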

If there is more interest, I'll write up the build process in more detail, with pictures.

For now, here are some shots of my utility shelf.

IMG_20110411_173250

IMG_20110411_173740


VMware releases ESXi for free

I totally missed this one until a few days ago, but VMware has released the ESXi hypervisor free of charge. They obviously see the pending challenge from Microsoft, Xen and VirtualBox, and are hoping to gain traction and mindshare in the community, but I have one piece of advice for VMware.

If you want to regain the "developer" mindshare – those evangelists that sponsor VMware in their corporations – then restore the VMTN Subscription.

VMTN was my affordable way into VMware, and because of that, and my persistence in my current workplace, VMware now has over 20 ESX Enterprise license sales.

Script to generate a list of valid email recipients from a qmail setup

Last week I set up a Postfix+MailScanner+ClamAV anti-spam and anti-virus mail relay server. Testing seemed all good, except that it was scanning lots of mail to bogus email addresses, e.g. nosuchuser@pgregg.com.

Postfix provides a relay_recipients file (at least that's what the MailScanner setup called it) where you specify the exact email addresses you are prepared to accept mail for.
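For reference, relay_recipients is a standard Postfix lookup table: one address per line mapped to OK, compiled with postmap and referenced from main.cf. The addresses below are examples only:

```
# /etc/postfix/relay_recipients -- compile with: postmap /etc/postfix/relay_recipients
paul@pgregg.com       OK
webmaster@pgregg.com  OK

# main.cf:
#   relay_recipient_maps = hash:/etc/postfix/relay_recipients
```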

In the old days we used SMTP VRFY, which was dropped because it let spammers verify good email addresses and clean their lists. However, by dropping it, it seems the spammers simply stopped cleaning and now blast out to any and all addresses they can find. The irony is that the problem is now worse, because we are constantly bombarded by spam to bogus addresses.

As my primary email system is (still) qmail, I needed a way to build a list of valid addresses that qmail would accept, so I set about writing a perl script to process the control/virtualdomains, users/assign and dot-qmail files the same way qmail does.

The result is here:
https://www.pgregg.com/projects/qmail/makevalidrecipients/MakeValidRecipientsList.pl

Feel free to make use of the script; hopefully it can help others too. Note that it doesn't handle ~alias users, nor database-backed setups, but manual and vpopmail setups should be just fine. No warranty implied or given though :) Use at your own risk.

Once I added the relay_recipients file to the Postfix relay and waited a few days, awstats reported that 99.8% of all my email was to bogus addresses. Wow! That is a massive saving on CPU (anti-spam/AV scanning) and traffic.

Enjoy.

Release: vmclone.pl for VMware ESX Server

I have released a script, vmclone.pl, to assist in cloning full virtual machines on an ESX Server box. It came about because of a gap in functionality: you can replicate individual hard disks, but the clone option in the VI client was mostly missing for whole VMs.

The tool will replicate and rename all the files in a VM with a single command-line invocation, and optionally lets you tweak (using regexes) some of the options, such as the memory size of the VM.
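The regex-tweaking idea is easy to picture. A rough equivalent in Python (vmclone.pl itself is Perl; the .vmx keys below are standard VMware ones, but the values are made up):

```python
import re

# A fragment of a cloned VM's .vmx file (made-up values).
vmx = '''displayName = "webserver01"
memsize = "512"
scsi0:0.fileName = "webserver01.vmdk"
'''

def tweak(vmx_text, key, new_value):
    """Replace the quoted value of one .vmx key, leaving other lines alone."""
    pattern = re.compile(r'^(%s\s*=\s*").*?(")' % re.escape(key), re.M)
    return pattern.sub(r'\g<1>%s\g<2>' % new_value, vmx_text)

vmx = tweak(vmx, "memsize", "1024")        # bump the clone's RAM
vmx = tweak(vmx, "displayName", "webserver02")  # rename the clone
print(vmx)
```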

The tool is available here: http://www.pgregg.com/projects/vmclone/

I would appreciate any feedback or suggestions on it.

Thanks.

All content © Paul Gregg, 1994 - 2024
This site http://pgregg.com has been online since 5th October 2000
Previous websites live at various URLs since 1994