My time on the IBM Linux Impact Team, and legacy

In this extensive article, Jon “maddog” Hall tells the behind-the-scenes story of how Linux and Open Source gained acceptance in the corporate world, eventually establishing themselves as the dominant platform in today’s enterprise IT: the operating system powering contemporary cloud infrastructure and, most notably, the primary way software innovation is driven.

Command Line in Windows

The command line on Windows (10 and later) no longer has to mean just PuTTY to a remote Linux machine. In fact, many Linux concepts have been incorporated into Windows.

Windows Subsystem for Linux

First, activate WSL. Since I enjoy using Fedora rather than Ubuntu, this guide by Jonathan Bowman helped me set up WSL exactly the way I like. The guide points to some old Fedora images, so pay attention to its links and get a newer one. The guide then also explains how to initialize the Fedora image, set it as the default, configure your user and so on.
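
If you prefer the short version, this is roughly what the setup boils down to in an elevated PowerShell (a sketch only: the Fedora rootfs file name and paths here are just examples, the guide has the current download links):

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
# reboot, then:
wsl --set-default-version 2
wsl --import Fedora C:\WSL\Fedora C:\Downloads\fedora-rootfs.tar
wsl --set-default Fedora
wsl -d Fedora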

Windows native SSH clients

Yes, Windows now ships tools from OpenSSH, such as the plain ssh client, ssh-agent and others. No need for PuTTY.

This guide by Chris Hastie explains how to activate the SSH agent with your private key. I’m not sure my setup is complete yet, since I haven’t tested whether it adds the key at session startup for a fully password-less experience. I’m still trying.

Basically, you need to activate a Windows service and have your private key in $HOME\.ssh\id_rsa, exactly like under Linux.
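In my understanding, the whole thing boils down to something like this in an elevated PowerShell (a sketch; the guide by Chris Hastie has the details):

# Make the bundled OpenSSH agent service start automatically and start it now
Get-Service ssh-agent | Set-Service -StartupType Automatic
Start-Service ssh-agent

# Then, from a regular prompt, load your private key into the agent once
ssh-add $HOME\.ssh\id_rsa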

Windows Terminal

The old command prompt is very limited, as we know, and obsolete. Luckily, Microsoft has released a new, much improved, Terminal application that can be installed from the Store. On Windows 11, the Terminal app is already there for you.

It allows defining sessions with custom commands such as wsl (to get into the Fedora WSL container installed above), cmd or ssh. I use tmux on all the Linux computers I connect to, so my default access command is:

ssh -l USERNAME -A -t HOSTNAME "tmux new-session -s default -n default -P -A -D"

The Windows Terminal app is highly customizable, with colors and icons, and this repo by Mark Badolato contains a great number of terminal color schemes. Select a few from the windowsterminal folder and paste their JSON snippets into the file %HOME%\AppData\Local\Packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\LocalState\settings.json.
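
For reference, a scheme goes into the schemes array of settings.json and is referenced by name from a profile, roughly like this (a trimmed sketch; a real scheme from the repo carries the full set of 16 colors, and the scheme and profile names here are just examples):

{
    "profiles": {
        "list": [
            {
                "name": "Fedora (WSL)",
                "commandline": "wsl -d Fedora",
                "colorScheme": "MySchemeName"
            }
        ]
    },
    "schemes": [
        {
            "name": "MySchemeName",
            "background": "#282A36",
            "foreground": "#F8F8F2"
        }
    ]
}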

Jupyter and Data Science on a Mac (without Anaconda)

macOS Catalina doesn’t ship with Python 3, only 2. But you can still get Python 3 from Apple, updated regularly through the system’s official update methods. You don’t need to get the awful Anaconda on your Mac to play with Python.

Python 3 ships with the Xcode Command Line Tools. To get them installed (without the heavy Xcode GUI), type this in your terminal:

xcode-select --install

This way, every time Apple releases an update, you’ll get it.

A dialog window will pop up; wait about 5 minutes for the installation to finish.

If you already have the complete Xcode installed, this step is unnecessary (you already have Python 3) and you can continue to the next section of the tutorial.

Clean Old Python Modules

In case you already have Python installed under your user, and modules downloaded with pip, remove them:

rm -rf ${HOME}/Library/Caches/com.apple.python/${HOME}/Library/Python \
${HOME}/Library/Python/ \
${HOME}/Library/Caches/pip

Install Python Modules

Now that you have a useful Python 3 installation, use pip3 to install the Python modules you’ll need. Don’t forget to use --user to get things installed in your home folder, so you won’t pollute the overall system. For my personal use, I need the complete machine learning, data wrangling and Jupyter suite:

pip3 install --user sqlalchemy
pip3 install --user matplotlib
pip3 install --user pandas
pip3 install --user jupyterlab
pip3 install --user PyMySQL
pip3 install --user configobj
pip3 install --user requests
pip3 install --user seaborn
pip3 install --user bs4
pip3 install --user xgboost
pip3 install --user scikit_learn

But you might need other things, such as Django or other SQLAlchemy drivers. Make yourself at home and install them with pip3.

For modules that require compilation and a special library, say crypto, do it like this:

CFLAGS="-I/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/include" \
LDFLAGS="-L/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib" \
pip3 install --user pycrypto

Use Correct Python 3 Binary

For some reason, Apple installs several different Python 3 binaries in different places of the system. The one installed in /usr/bin/python3 has problems loading some libraries, and instrumentation with install_name_tool would be required. So let’s just use the binary that works better:

export PATH=/Library/Developer/CommandLineTools/usr/bin:$PATH

Run Jupyter Lab on your Mac

Commands installed by pip3 will be available in the ~/Library/Python/3.7/bin/ folder, so just add it to your PATH:

export PATH=$PATH:~/Library/Python/3.7/bin/

Now I can simply type jupyter-lab anywhere in the terminal to make it fire up my browser and get a Jupyter environment.
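
To make both PATH additions permanent, something like this should do (zsh is the default shell on Catalina; adjust the 3.7 directory to whatever Python version your Command Line Tools shipped):

echo 'export PATH=/Library/Developer/CommandLineTools/usr/bin:$PATH:~/Library/Python/3.7/bin' >> ~/.zshrc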

More about Xcode Command Line Tools

Xcode Command Line Tools will also get you a handful of other useful developer tools, such as git, subversion, the GCC and LLVM compilers and linkers, make, m4 and a complete Python 3 distribution. You can see most of the installation in the /Library/Developer/CommandLineTools folder.

For production and high-end processing I’ll still use Python on Linux with my preferred distribution’s default packages (no Anaconda). But this method of getting Python on macOS is the fastest and cleanest way to get going on your own data science laptop, without a VM or a container.

Jupyter Notebook on Fedora with official packages and SSL

Jupyter Notebooks are the elegant way Data Scientists work, and all the software needed to run them is already pre-packaged on Fedora (and on other Linux distributions). You are encouraged to use your distribution’s packaging infrastructure to install Python packages, and to avoid at all costs installing Python packages with pip, conda, Anaconda or from source code. The reasons for this good practice are security, ease of use, keeping the system clean, and making installation procedures easily reproducible in DevOps scenarios.
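
Just to give an idea of what that looks like, something along these lines should get you a working, SSL-enabled Jupyter on a current Fedora (a sketch; package names and sscg’s default output file names may vary between releases):

# as root: everything comes from the official repos
dnf install python3-notebook mathjax sscg python3-seaborn python3-pandas python3-scikit-learn

# as your regular user: create a self-signed certificate and start the notebook over HTTPS
sscg
jupyter notebook --certfile=service.pem --keyfile=service-key.pem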

Jupyter Notebook on Fedora with MathJax and Python


The difference between Open Source and Open Governance

When Sun and then Oracle bought MySQL AB, the company behind the original development, the governance of MySQL’s open source development gradually closed. Now, only Oracle writes updates; updates from other sources, whether individuals or other companies, are ignored. MySQL is still open source, but it has closed governance.

MySQL is one of the most popular databases in the world. Virtually every WordPress and Drupal website runs on top of MySQL, as do the majority of generic Ruby, Django, Flask and PHP apps that have MySQL as their database of choice.

When an open source project becomes this popular and essential, we say it is gaining momentum. MySQL is so popular that it is bigger than its creators. In practical terms, that means its creators can disappear and the community will take over the project and continue its evolution. It also means the software is solid, support is abundant and local, sometimes a commodity or even free.

In the case of MySQL, the source code was forked by the community, and the MariaDB project started from there. Nowadays, when somebody says they are “using MySQL”, they are in fact probably using MariaDB, which has evolved from where MySQL stopped in time.

Open source vs. open governance

Open source software’s momentum serves as a powerful insurance policy for the investment of time and resources an individual or enterprise user will put into it. This is the true benefit behind Linux as an operating system, Samba as a file server, Apache HTTPD as a web server, Hadoop, Docker, MongoDB, PHP, Python, JQuery, Bootstrap and other hyper-essential open source projects, each on its own level of the stack. Open source momentum is the safe antidote to technology lock-in. Having learned that lesson over the last decade, enterprises are now looking for the new functionalities that are gaining momentum: cloud management software, big data, analytics, integration middleware and application frameworks.

On the open domain, the only two non-functional things that matter in the long term are whether it is open source and if it has attained momentum in the community and industry. None of this is related to how the software is being written, but this is exactly what open governance is concerned with: the how.

Open source governance is a policy that promotes a democratic approach to participating in the development and strategic direction of a specific open source project. It is an effective strategy to attract developers and IT industry players to a single open source project, with the objective of attaining momentum faster. It seeks to avoid community fragmentation and to ensure the commitment of IT industry players.

The value of momentum

Open governance alone does not guarantee that the software will be good, popular or useful (though formal open governance only happens on projects that have already captured some attention of IT industry leaders). A few examples of open source projects that have formal open governance are CloudFoundry, OpenStack, JQuery and all the projects under the Apache Software Foundation umbrella.

For users, the indirect benefit of open governance is only related to the speed at which the open source project reaches momentum and high popularity.

Open governance is important only for the people looking to govern or contribute. If you just want to use the software, open source momentum is far more important.

Also published on Thoughts on Cloud.

Also published on LinkedIn.

Advanced Multimedia on the Linux Command Line

There was a time when Apple macOS was the best platform to handle multimedia (audio, image, video). This might still be true in the GUI space, but Linux presents a much wider range of possibilities when you go to the command line, especially if you want to:

  • Process hundreds or thousands of files at once
  • Same as above, but organized across many folders, keeping the folder structure
  • Same as above, but with much finer-grained options, including lossless processing and the pixel-perfect results that most GUI tools won’t give you

The Open Source community has produced state-of-the-art command line tools such as ffmpeg, exiftool and others, which I use every day for non-trivial things, along with advanced shell scripting. Sure, you can get these tools installed on Mac or Windows, and you can even use almost all of these recipes on those platforms, but Linux is the native platform for these tools, and it is easier to get the environment ready there.

These are my personal notes and I encourage you to understand each step of the recipes and adapt them to your workflows. They are organized in Audio, Video and Image+Photo sections.

I use Fedora Linux and I mention Fedora package names to be installed. You can easily find the same packages on Ubuntu, Debian, Gentoo etc, and use these same recipes.

Audio

Show information (tags, bitrate etc) about a multimedia file

ffprobe file.mp3
ffprobe file.m4v
ffprobe file.mkv

Lossless conversion of all FLAC files into more compatible, but still Open Source, ALAC

ls *flac | while read f; do
	ffmpeg -i "$f" -acodec alac -vn "${f[@]/%flac/m4a}" < /dev/null;
done

Convert all FLAC files into 192kbps MP3

ls *flac | while read f; do
   ffmpeg -i "$f" -qscale:a 2 -vn "${f[@]/%flac/mp3}" < /dev/null;
done

Convert all FLAC files into ~256kbps VBR AAC with Fraunhofer AAC encoder

First, make sure you have the Negativo17 build of FFmpeg, by running this as root:

dnf config-manager --add-repo=http://negativo17.org/repos/fedora-multimedia.repo
dnf update ffmpeg

Now encode:

ls *flac | while read f; do
   ffmpeg -i "$f" -vn -c:a libfdk_aac -vbr 5 -movflags +faststart "${f[@]/%flac/m4a}" < /dev/null;
done

It has been said that the Fraunhofer AAC library can’t be legally linked to ffmpeg due to license incompatibilities. In addition, ffmpeg’s default AAC encoder has been improved and is almost as good as Fraunhofer’s, especially for constant bit rate compression. In that case, this is the command:

ls *flac | while read f; do
   ffmpeg -i "$f" -vn -c:a aac -b:a 256k -movflags +faststart "${f[@]/%flac/m4a}" < /dev/null;
done

Same as above but under a complex directory structure

This is one of my favorites, and extremely powerful. It is very useful when you get a hi-fi, complete but useless WMA-Lossless collection and need to convert it losslessly to something more portable, ALAC in this case. Change FMT=flac to FMT=wav or FMT=wma (only when it is WMA-Lossless) to match your source files. Don’t forget to tag the generated files.

FMT=flac
# Create identical directory structure under new "alac" folder
find . -type d | while read d; do
   mkdir -p "alac/$d"
done

find . -name "*$FMT" | sort | while read f; do
   ffmpeg -i "$f" -acodec alac -vn "alac/${f[@]/%$FMT/m4a}" < /dev/null;
   mp4tags -E "Deezer lossless files (https://github.com/Ghostfly/deezDL) + 'ffmpeg -acodec alac'" "alac/${f[@]/%$FMT/m4a}";
done

Embed lyrics into M4A files

The iPhone and iPod music players can display a file’s embedded lyrics, which is a cool feature. There are several ways to get lyrics into your music files. If you download music from Deezer using SMLoadr, you’ll get files with embedded lyrics, and the FLAC-to-ALAC process above will correctly carry the lyrics over to the M4A container. Another method is to use the beets music tagger and one of its plugins, though beets is very slow at fetching lyrics for entire albums from the Internet.

The third method is manual. Let lyrics.txt be a text file with your lyrics. To tag it into your music.m4a, just do this:

mp4tags -L "$(cat lyrics.txt)" music.m4a

And then check to see the embedded lyrics:

ffprobe music.m4a 2>&1 | less

Convert an APE+CUE, FLAC+CUE or WAV+CUE album-on-one-file into one file per track, as ALAC or MP3

If some of your friends have the horrible tendency to commit this crime and rip CDs as one file for the entire CD, there is an automation to fix it. APE is the most difficult case and it is what I’ll show; FLAC and WAV are shortcuts of this method.

  1. Make a lossless conversion of the APE file into something more manageable, as WAV:
    ffmpeg -i audio-cd.ape audio-cd.wav
  2. Now the magic: use the metadata in the CUE file to split the single file into separate tracks, renaming them accordingly. You’ll need the shnsplit command, available in the shntool package on Fedora (to install: dnf install shntool). Additionally, CUE files usually use the ISO-8859-1 (Latin-1) charset, so a conversion to Unicode (UTF-8) is required:
    iconv -f Latin1 -t UTF-8 audio-cd.cue | shnsplit -t "%n · %p ♫ %t" audio-cd.wav
  3. Now you have a series of nicely named WAV files, one per CD track. Let’s convert them into lossless ALAC using one of the recipes above:
    ls *wav | while read f; do
       ffmpeg -i "$f" -acodec alac -vn "${f[@]/%wav/m4a}" < /dev/null;
    done

    This will get you lossless ALAC files converted from the intermediary WAV files. You can also convert them into FLAC or MP3 using variations of the above recipes.

Now the files are ready for your tagger.

Video

Add chapters and soft subtitles from SRT file to M4V/MP4 movie

This is a lossless and fast process: chapters and subtitles are added as tags and streams to the file; the audio and video streams are not re-encoded.

  1. Make sure your SRT file is UTF-8 encoded:
    bash$ file subtitles_file.srt
    subtitles_file.srt: ISO-8859 text, with CRLF line terminators
    

    It is not UTF-8 encoded; it is some ISO-8859 variant, which I need to identify to convert it correctly. My example uses a Brazilian Portuguese subtitle file, which I know is ISO-8859-1 (Latin-1) encoded, because most Latin-script languages use this encoding.

  2. Let’s convert it to UTF-8:
    bash$ iconv -f latin1 -t utf8 subtitles_file.srt > subtitles_file_utf8.srt
    bash$ file subtitles_file_utf8.srt
    subtitles_file_utf8.srt: UTF-8 Unicode text, with CRLF line terminators
    
  3. Check chapters file:
    bash$ cat chapters.txt
    CHAPTER01=00:00:00.000
    CHAPTER01NAME=Chapter 1
    CHAPTER02=00:04:31.605
    CHAPTER02NAME=Chapter 2
    CHAPTER03=00:12:52.063
    CHAPTER03NAME=Chapter 3
    …
    
  4. Now we are ready to add them all to the movie, along with setting the movie name and embedding a cover image so the movie looks nice in your media player’s list of content. Note that this process writes the movie file in place and does not create another file, so make a backup of your movie while you are learning:
    MP4Box -ipod \
           -itags 'track=The Movie Name:cover=cover.jpg' \
           -add 'subtitles_file_utf8.srt:lang=por' \
           -chap 'chapters.txt:lang=eng' \
           movie.mp4
    

The MP4Box command is part of GPAC.

OpenSubtitles.org has a large collection of subtitles in many languages and you can search its database with the IMDB ID of the movie. And ChapterDB has the same for chapters files.

Add cover image and other metadata to a movie file

While iTunes can tag and beautify your movie files on Windows and Mac, libmp4v2 can do the same on Linux. Here we’ll use it to add the movie cover image we downloaded from IMDB, along with some movie metadata, for Woody Allen’s 2011 movie Midnight in Paris:

mp4tags -H 1 -i movie -y 2011 -a "Woody Allen" -s "Midnight in Paris" -m "While on a trip to Paris with his..." "Midnight in Paris.m4v"
mp4art -k -z --add cover.jpg "Midnight in Paris.m4v"

This way the movie file will look good and in the correct place when transferred to your iPod/iPad/iPhone.

Of course, make sure the right package is installed first:

dnf install libmp4v2

The file extensions MOV, MP4, M4V and M4A all denote the same container format from the ISO MPEG-4 standard. They have different names just to give the user a hint about what they carry.

Decrypt and rip a DVD the lossless way

  1. Make sure you have the RPMFusion and the Negativo17 repos configured
  2. Install libdvdcss and vobcopy
    dnf -y install libdvdcss vobcopy
  3. Mount the DVD and rip it; this has to be done as root
    mount /dev/sr0 /mnt/dvd;
    cd /target/folder;
    vobcopy -m /mnt/dvd .

You’ll get a directory tree with decrypted VOB and BUP files. You can generate an ISO file from them or, much more practically, use HandBrake to convert the DVD titles into MP4/M4V (more compatible with a wide range of devices) or MKV/WEBM files.
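
For the ISO route, something like this should work (a sketch, assuming the ripped folder preserved the original VIDEO_TS structure, which vobcopy -m does; genisoimage is in the Fedora repos):

genisoimage -dvd-video -V "MOVIE_NAME" -o movie.iso /target/folder/dvd-name/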

Convert 240fps video into 30fps slow motion, the lossless way

Modern iPhones can record videos at 240 or 120fps, so that when you watch them at 30fps they look like slow motion. But regular players will play them at 240 or 120fps, hiding the slo-mo effect.

We’ll need to handle audio and video in different ways: the video FPS change from 240 to 30 is lossless, while the audio stretching is lossy.

# make sure you have the right packages installed
dnf install mkvtoolnix sox gpac faac
#!/bin/bash

# Script by Avi Alkalay
# Freely distributable

f="$1"
ofps=30
noext=${f%.*}
ext=${f##*.}

# Get original video frame rate
ifps=`ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 "$f" < /dev/null  | sed -e 's|/1||'`
echo

# exit if not high frame rate
[[ "$ifps" -ne 120 ]] && [[ "$ifps" -ne 240 ]] && exit

fpsRate=$((ifps/ofps))
fpsRateInv=`awk "BEGIN {print $ofps/$ifps}"`

# lossless video conversion into 30fps through repackaging into MKV
mkvmerge -d 0 -A -S -T \
	--default-duration 0:${ofps}fps \
	"$f" -o "v$noext.mkv"

# lossless repack from MKV to MP4
ffmpeg -loglevel quiet -i "v$noext.mkv" -vcodec copy "v$noext.mp4"
echo

# extract subtitles, if original movie has it
ffmpeg -loglevel quiet -i "$f" "s$noext.srt"
echo

# resync subtitles using similar method with mkvmerge
mkvmerge --sync "0:0,${fpsRate}" "s$noext.srt" -o "s$noext.mkv"

# get simple synced SRT file
rm "s$noext.srt"
ffmpeg -i "s$noext.mkv" "s$noext.srt"

# remove undesired formatting from subtitles
sed -i -e 's|<font size="8"><font face="Helvetica">\(.*\)</font></font>|\1|' "s$noext.srt"

# extract audio to WAV format
ffmpeg -loglevel quiet -i "$f" "$noext.wav"

# make audio longer based on ratio of input and output framerates
sox "$noext.wav" "a$noext.wav" speed $fpsRateInv

# lossy stretched audio conversion back into AAC (M4A) 64kbps (because we know the original audio was mono 64kbps)
faac -q 200 -w -s --artist a "a$noext.wav"

# repack stretched audio and video into original file while removing the original audio and video tracks
cp "$f" "${noext}-slow.${ext}"
MP4Box -ipod -rem 1 -rem 2 -rem 3 -add "v$noext.mp4" -add "a$noext.m4a" -add "s$noext.srt" "${noext}-slow.${ext}"

# remove temporary files 
rm -f "$noext.wav" "a$noext.wav" "v$noext.mkv" "v$noext.mp4" "a$noext.m4a" "s$noext.srt" "s$noext.mkv"

1 Photo + 1 Song = 1 Movie

If the audio is already AAC-encoded (may also be ALAC-encoded), create an MP4/M4V file:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.m4a -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a copy movie.m4v

The above method creates a very efficient 0.2 frames-per-second (-framerate 0.2) H.264 video from the photo, while simply adding the audio losslessly. Such a very-low-frame-rate video may present sync problems with subtitles on some players. In that case, simply remove the -framerate 0.2 parameter to get a regular 25fps video, at the cost of a bigger file size.

The -vf scale=960:-1 parameter tells FFmpeg to resize the image to 960px width and calculate the proportional height. Remove it in case you want a video with the same resolution as the photo. A 12-megapixel photo (around 4032×3024) will get you a near-4K video.

If the audio is MP3, create an MKV file:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.mp3 -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a copy movie.mkv

If audio is not AAC/M4A but you still want an M4V file, convert audio to AAC 192kbps:

ffmpeg -loop 1 -framerate 0.2 -i photo.jpg -i song.mp3 -shortest -c:v libx264 -tune stillimage -vf scale=960:-1 -c:a aac -strict experimental -b:a 192k movie.m4v

See more about FFMPEG photo resizing.

There is also a more efficient and completely lossless way to turn a photo into a video with audio, using extended podcast techniques. But that’s much more complicated and requires advanced use of GPAC’s MP4Box and NHML. In case you are curious, see the Podcast::chapterize() and Podcast::imagify() methods in my music-podcaster script. The trick is to create an NHML (XML) file referencing the image(s) and add it as a track to the M4A audio file.

Image and Photo

Move images with no EXIF header to another folder

mkdir noexif;
exiftool -filename -T -if '(not $datetimeoriginal or ($datetimeoriginal eq "0000:00:00 00:00:00"))' *HEIC *JPG *jpg | while read f; do mv "$f" noexif/; done

Set EXIF photo create time based on file create time

Warning: use this only if image files have correct creation time on filesystem and if they don’t have an EXIF header.

exiftool -overwrite_original '-DateTimeOriginal< ${FileModifyDate}' *CR2 *JPG *jpg

Rotate photos based on EXIF’s Orientation flag, plus make them progressive. Lossless

jhead -autorot -cmd "jpegtran -progressive '&i' > '&o'" -ft *jpg

Rename photos to a more meaningful filename

This process will rename the silly, sequential, confusing and meaningless photo file names that come from your camera into a readable, sortable and useful format. Example:

IMG_1234.JPG ➡ 2015.07.24-17.21.33 • Max playing with water【iPhone 6s✚】.jpg

Note that the new file name has the date and time it was taken, what’s in the photo, and the camera model that was used.

  1. First keep the original filename, as it came from the camera, in the OriginalFileName tag:
    exiftool -overwrite_original '-OriginalFileName<${filename}' *CR2 *JPG *jpg
  2. Now rename:
    exiftool '-filename<${DateTimeOriginal} 【${Model}】%.c.%e' -d %Y.%m.%d-%H.%M.%S *CR2 *HEIC *JPG *jpg
  3. Remove the ‘0’ index if not necessary:
    \ls *HEIC *JPG *jpg *heic | while read f; do
        nf=`echo "$f" | sed -e 's/0.JPG/.jpg/i; s/0.HEIC/.heic/i'`;
        t=`echo "$f" | sed -e 's/0.JPG/1.jpg/i; s/0.HEIC/1.heic/i'`;
        [[ ! -f "$t" ]] && mv "$f" "$nf";
    done

    Alternative for macOS without SED:

    \ls *HEIC *JPG *jpg *heic | perl -e '
    	while (<>) {
    		chop; $nf=$_; $t=$_;
    		$nf=~s/0.JPG/.jpg/i; $nf=~s/0.HEIC/.heic/i;
    		$t=~s/0.JPG/1.jpg/i; $t=~s/0.HEIC/1.heic/i;
    		rename($_,$nf) if (! -e $t);
    	}'
  4. Optional: make lower case extensions:
    \ls *HEIC *JPG | while read f; do
        nf=`echo "$f" | sed -e 's/JPG/jpg/; s/HEIC/heic/'`;
        mv "$f" "$nf";
    done
  5. Optional: simplify camera name, for example turn “Canon PowerShot G1 X” into “Canon G1X” and make lower case extension at the same time:
    \ls *HEIC *JPG *jpg *heic | while read f; do
        nf=`echo "$f" | sed -e 's/Canon PowerShot G1 X/Canon G1X/;
          s/iPhone 6s Plus/iPhone 6s✚/;
          s/iPhone 7 Plus/iPhone 7✚/;
          s/Canon PowerShot SD990 IS/Canon SD990 IS/;
          s/HEIC/heic/;
          s/JPG/jpg/;'`;
        mv "$f" "$nf";
    done

You’ll get file names such as 2015.07.24-17.21.33 【Canon 5D Mark II】.jpg. If you took more than one photo in the same second, exiftool will automatically add an index before the extension.

Even more semantic photo file names based on Subject tag

\ls *【*】* | while read f; do
	s=`exiftool -T -Subject "$f"`;
	if [[ " $s" != " -" ]]; then         
		nf=`echo "$f" | sed -e "s/ 【/ • $s 【/; s/\:/∶/g;"`;
		mv "$f" "$nf";
	fi;
done

Copy Subject tag to Title-related tags

exiftool -overwrite_original '-ImageDescription< ${Subject}' '-XPTitle<${Subject}' '-Title<${Subject}' '-Description<${Subject}' '-Caption-Abstract<${Subject}' *jpg

Full rename: a consolidation of some of the previous commands

exiftool '-filename<${DateTimeOriginal} • ${Subject} 【${Model}】%.c.%e' -d %Y.%m.%d-%H.%M.%S *CR2 *JPG *HEIC *jpg *heic

Set photo “Creator” tag based on camera model

  1. First list all cameras that contributed photos to current directory:
    exiftool -T -Model *jpg | sort -u

    Output is the list of camera models in these photos:

    Canon EOS REBEL T5i
    DSC-H100
    iPhone 4
    iPhone 4S
    iPhone 5
    iPhone 6
    iPhone 6s Plus
  2. Now set creator on photo files based on what you know about camera owners:
    CRE="John Doe";    exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/DSC-H100/'            *.jpg
    CRE="Jane Black";  exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/Canon EOS REBEL T5i/' *.jpg
    CRE="Mary Doe";    exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 5/'            *.jpg
    CRE="Peter Black"; exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 4S/'           *.jpg
    CRE="Avi Alkalay"; exiftool -overwrite_original -creator="$CRE" -by-line="$CRE" -Artist="$CRE" -if '$Model=~/iPhone 6s Plus/'      *.jpg

Recursively search people in photos

If you geometrically mark people’s faces and their names in your photos using tools such as Picasa, you can easily search for the photos that contain “Suzan” or “Marcelo” this way:

exiftool -fast -r -T -Directory -FileName -RegionName -if '$RegionName=~/Suzan|Marcelo/' .

-Directory, -FileName and -RegionName specify the things you want to see in the output; you can remove -RegionName for a cleaner output.
The -r makes the search recursive. This is pretty powerful.

Make photos timezone-aware

Your camera tags your photos only with the local time, in the CreateDate or DateTimeOriginal tags. There is another set of tags, GPSDateStamp and GPSTimeStamp, that must contain the UTC time the photos were taken, but your camera won’t help you here. Fortunately, you can derive these values if you know the timezone where the photos were taken. Here are two examples, one for photos taken in timezone -02:00 (Brazil daylight saving time) and one for timezone +09:00 (Japan):

exiftool -overwrite_original '-gpsdatestamp<${CreateDate}-02:00' '-gpstimestamp<${CreateDate}-02:00' '-TimeZone<-02:00' '-TimeZoneCity<São Paulo' *.jpg
exiftool -overwrite_original '-gpsdatestamp<${CreateDate}+09:00' '-gpstimestamp<${CreateDate}+09:00' '-TimeZone<+09:00' '-TimeZoneCity<Tokyo' Japan_Photos_folder

Use exiftool to check results on a modified photo:

exiftool -s -G -time:all -gps:all 2013.10.12-23.45.36-139.jpg
[EXIF]          CreateDate                      : 2013:10:12 23:45:36
[Composite]     GPSDateTime                     : 2013:10:13 01:45:36Z
[EXIF]          GPSDateStamp                    : 2013:10:13
[EXIF]          GPSTimeStamp                    : 01:45:36

This shows that the local time when the photo was taken was 2013:10:12 23:45:36. Using exiftool to set the timezone to -02:00 actually means finding the correct UTC time, which can be seen in GPSDateTime as 2013:10:13 01:45:36Z. The difference between these two tags gives us the timezone, so we can read the photo time as 2013:10:12 23:45:36-02:00.

Geotag photos based on time and Moves mobile app records

Moves is an amazing app for your smartphone that simply records for yourself (not social and not shared) everywhere you go and all places visited, 24h a day.

  1. Make sure all photos’ CreateDate or DateTimeOriginal tags are correct and precise; achieve this simply by setting the camera clock correctly before taking the pictures.
  2. Login and export your Moves history.
  3. Geotag the photos informing ExifTool the timezone they were taken, -08:00 (Las Vegas) in this example:
    exiftool -overwrite_original -api GeoMaxExtSecs=86400 -geotag ../moves_export/gpx/yearly/storyline/storyline_2015.gpx '-geotime<${CreateDate}-08:00' Folder_with_photos_from_trip_to_Las_Vegas

Some important notes:

  • It is important to put the entire ‘-geotime’ parameter inside single quotes, as I did in the example.
  • The ‘-geotime’ parameter is needed even if the image files are timezone-aware (as per the previous tutorial).
  • The ‘-api GeoMaxExtSecs=86400’ parameter should not be used unless the photo was taken more than 90 minutes away from any movement detected by the GPS.

Concatenate all images together in one big image

  • In 1 column and 8 lines:
    montage -mode concatenate -tile 1x8 *jpg COMPOSED.JPG
  • In 8 columns and 1 line:
    montage -mode concatenate -tile 8x1 *jpg COMPOSED.JPG
  • In a 4×2 matrix:
    montage -mode concatenate -tile 4x2 *jpg COMPOSED.JPG

The montage command is part of the ImageMagick package.

Docker on Bluemix with services

Docker on Bluemix with automated full-stack deploys and delivery pipelines

Introduction

This document walks through working examples of how to use advanced Bluemix platform features, such as:

  • Docker on Bluemix, integrated with Bluemix APIs and middleware
  • Full stack automated and unattended deployments with DevOps Services Pipeline, including Docker
  • Full stack automated and unattended deployments with cf command line interface, including Docker

For this, I’ll use the following source code structure:

github.com/avibrazil/bluemix-docker-kickstart

The source code currently brings to life, as an example, a PHP application (the popular WordPress blogging platform) integrated with some Bluemix services and Docker infrastructure, but it could be any Python, Java, Ruby etc app.

This is how full stack app deployments should be

Before we start: understand Bluemix 3 pillars

I feel it is important to position what Bluemix really is and which of its parts we are going to use. Bluemix is composed of 3 different things:

  1. Bluemix is a hosting environment to run any type of web app or web service. This is the only function provided by the CloudFoundry Open Source project, which is an advanced PaaS that lets you provision and de-provision runtimes (Java, Python, Node etc), libraries and services to be used by your app. These operations can be triggered through the Bluemix.net portal or by the cf command from your laptop. IBM has extended this part of Bluemix with functions not currently available on CloudFoundry, notably the capability of executing regular VMs and Docker containers.
  2. Bluemix provides pre-installed libraries, APIs and middleware. IBM is constantly adding functions to the Bluemix marketplace, such as cognitive computing APIs in the Watson family, data processing middleware such as Spark and dashDB, or even IoT and Blockchain-related tools. These are high value components that can add a bit of magic to your app. Many of those are Open Source.
  3. DevOps Services. Accessible from hub.jazz.net, it provides:
    • Public and private collaborative Git repositories.
    • UI to build, manage and execute the app delivery pipeline, which does everything needed to transform your pure source code into a final running application.
    • The Track & Plan module, based on Rational Team Concert, to let your team mates and clients exchange activities and control project execution.

This tutorial will dive into #1 and some parts of #3, while using some services from #2.

The architecture of our app

Docker on Bluemix with services

When fully provisioned, the entire architecture will look like this: several Bluemix services (MySQL, Object Storage) packaged into a CloudFoundry app (the bridge app) that serves some Docker containers, which in turn do the real work. Credentials to access those services are automatically provided to the containers as environment variables (VCAP_SERVICES).
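
Just to illustrate the shape of that variable, here is a trimmed, hypothetical VCAP_SERVICES entry for the ClearDB service (real entries carry more fields and, of course, real credentials):

{
  "cleardb": [
    {
      "name": "bridge-app-database",
      "credentials": {
        "hostname": "us-cdbr-example.cleardb.net",
        "name": "ad_0123456789abcde",
        "username": "b0123456789abcde",
        "password": "some-secret"
      }
    }
  ]
}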

Structure of Source Code

The example source code repo contains boilerplate code that is intentionally generic and clean so you can easily fork, add and modify it to fit your needs. Here is what it contains:

bridge-app folder and manifest.yml file
The CloudFoundry manifest.yml that defines app name, dependencies and other characteristics to deploy the app contents under bridge-app.
containers
Each directory contains a Dockerfile and other files to create Docker containers. In this tutorial we’ll use only the phpinfo and wordpress directories, but there are some other useful examples you can use.
.bluemix folder
When this code repository is imported into Bluemix via the “Deploy to Bluemix” button, metadata in here will be used to set up your development environment under DevOps Services.
admin folder
Assorted shell scripts, mostly used for deployments.

Watch the deployment

The easiest way to deploy the app is through DevOps Services:

  1. Click to deploy

    Deploy to Bluemix

  2. Provide a unique name to your copy of the app, also select the target Bluemix space
    Deploy to Bluemix screen
  3. Go to DevOps Services ➡ find your project clone ➡ select Build & Deploy tab and watch
    Full Delivery Pipeline on Bluemix

Under the hood: understand the app deployment in 2 strategies

Conceptually, these are the things you need to do to fully deploy an app with Docker on Bluemix:

  1. Instantiate external services needed by your app, such as databases, APIs etc.
  2. Create a CloudFoundry app to bind those services so you can handle them all as one block.
  3. Create the Docker images your app needs and register them on your Bluemix private Docker Registry (equivalent to the public Docker Hub).
  4. Instantiate your images in executable Docker containers, connecting them to your backend services through the CloudFoundry app.

The idea is to encapsulate all these steps in code so deployments can be done entirely unattended. It’s what I call brainless, 1-click deployment. There are 2 ways to do that:

  • A regular shell script that extensively uses the cf command. This is the admin/deploy script in our code.
  • An in-code delivery pipeline that can be executed by Bluemix DevOps Services. This is the .bluemix/pipeline.yml file.

From here, we will detail each of these steps both as commands (on the script) and as stages of the pipeline.

  1. Instantiation of external services needed by the app…

    I used the cf marketplace command to find the service names and plans available. ClearDB provides MySQL as a service. And just as an example, I’ll provision an additional Object Storage service. Note the similarities between both methods.

    Deployment Script
    cf create-service \
      cleardb \
      spark \
      bridge-app-database;
    
    cf create-service \
      Object-Storage \
      Free \
      bridge-app-object-store;
    Delivery Pipeline

    When you deploy your app to Bluemix, DevOps Services will read your manifest.yml and automatically provision whatever is under the declared-services block. In our case:

    declared-services:
      bridge-app-database:
        label: cleardb
        plan: spark
      bridge-app-object-store:
        label: Object-Storage
        plan: Free
    
  2. Creation of an empty CloudFoundry app to hold together these services

    The manifest.yml file has all the details about our CF app: name, size, the CF buildpack to use, and dependencies (such as the ones instantiated in the previous stage). So a plain cf push will use it and do the job. Since this app is just a bridge between our containers and the services, we’ll use minimal resources and the minimal noop-buildpack. After this stage you’ll be able to see the app running on your Bluemix console.

    Deployment Script
    cf push;
    Delivery Pipeline
    Stage named “➊ Deploy CF bridge app” simply calls cf push;
  3. Creation of Docker images

    The heavy lifting here is done by the Dockerfiles. We’ll use base CentOS images with official packages only, in an attempt to follow best practices. See the phpinfo and wordpress Dockerfiles to understand how I turned a basic OS into what I need.

    The cf ic command is basically a clone of the well-known docker command, but pre-configured to use the Bluemix Docker infrastructure. There is simple documentation for installing the IBM Containers plugin for cf.

    Deployment Script
    cf ic build \
       -t phpinfo_image \
       containers/phpinfo/;
    
    cf ic build \
       -t wordpress_image \
       containers/wordpress/;
    
    
    Delivery Pipeline

    Stages handling this are “➋ Build phpinfo Container” and “➍ Build wordpress Container”.

    Open these stages and note how image names are set.

    After this stage, you can query your Bluemix private Docker Registry and see the images there. Like this:

    $ cf ic images
    REPOSITORY                                          TAG     IMAGE ID      CREATED     SIZE
    registry.ng.bluemix.net/avibrazil/phpinfo_image     latest  69d78b3ce0df  3 days ago  104.2 MB
    registry.ng.bluemix.net/avibrazil/wordpress_image   latest  a801735fae08  3 days ago  117.2 MB
    

    A Docker image is not yet a container. A Docker container is an image that is being executed.

  4. Run containers integrated with previously created bridge app

    To make our tutorial richer, we’ll run 2 sets of containers:

    1. The phpinfo one, just to see how Bluemix gives us an integrated environment
      Deployment Script
      cf ic run \
         -P \
         --env 'CCS_BIND_APP=bridge-app-name' \
         --name phpinfo_instance \
         registry.ng.bluemix.net/avibrazil/phpinfo_image;
      
      
      IP=`cf ic ip request | 
          grep "IP address" | 
          sed -e "s/.* \"\(.*\)\" .*/\1/"`;
      
      
      cf ic ip bind $IP phpinfo_instance;
      Delivery Pipeline

      Equivalent stage is “➌ Deploy phpinfo Container”.

      Open this stage and note how some environment variables are defined, especially BIND_TO.

      Bluemix DevOps Services default scripts use these environment variables to correctly deploy the containers.

      The CCS_BIND_APP variable on the script and BIND_TO on the pipeline are key here. Their mission is to make the bridge app’s VCAP_SERVICES available to this container as environment variables.

      In CloudFoundry, VCAP_SERVICES is an environment variable containing a JSON document with all credentials needed to actually access the app’s provisioned APIs, middleware and services, such as host names, users and passwords. See an example below.

    2. A container group with 2 highly available, monitored and balanced identical wordpress containers
      Deployment Script
      cf ic group create \
         -P \
         --env 'CCS_BIND_APP=bridge-app-name' \
         --auto \
         --desired 2 \
         --name wordpress_group_instance \
         registry.ng.bluemix.net/avibrazil/wordpress_image
      
      
      cf ic route map \
         --hostname some-name-wordpress \
         --domain $DOMAIN \
         wordpress_group_instance

      The cf ic group create command creates a container group and runs it at once.

      The cf ic route map command configures the Bluemix load balancer to capture traffic to http://some-name-wordpress.mybluemix.net and route it to the wordpress_group_instance container group.

      Delivery Pipeline

      Equivalent stage is “➎ Deploy wordpress Container Group”.

      Look at this stage’s Environment Properties to see how I’m configuring the container group.

      I had to manually modify the standard deployment script, disabling deploycontainer and enabling deploygroup.

See the results

At this point, WordPress (the app we deployed) is up and running inside a Docker container, already using the ClearDB MySQL database provided by Bluemix. Access the URL of your wordpress container group and you will see this:

WordPress on Docker with Bluemix

Bluemix dashboard also shows the components running:

Bluemix dashboard with apps and containers

But the most interesting evidence can be seen by accessing the phpinfo container URL or IP. Scroll to the environment variables section to see all the services’ credentials available as environment variables from VCAP_SERVICES:

Bluemix VCAP_SERVICES as seen by a Docker container

I use these credentials to configure WordPress while building the Dockerfile, so it can find its database when it runs:

.
.
.
RUN yum -y install epel-release;\
	yum -y install wordpress patch;\
	yum clean all;\
	sed -i '\
		         s/.localhost./getenv("VCAP_SERVICES_CLEARDB_0_CREDENTIALS_HOSTNAME")/ ; \
		s/.database_name_here./getenv("VCAP_SERVICES_CLEARDB_0_CREDENTIALS_NAME")/     ; \
		     s/.username_here./getenv("VCAP_SERVICES_CLEARDB_0_CREDENTIALS_USERNAME")/ ; \
		     s/.password_here./getenv("VCAP_SERVICES_CLEARDB_0_CREDENTIALS_PASSWORD")/ ; \
	' /etc/wordpress/wp-config.php;\
	cd /etc/httpd/conf.d; patch < /tmp/wordpress.conf.patch;\
	rm /tmp/wordpress.conf.patch
.
.
.

So I’m using sed, the text-editor-as-a-command, to edit the WordPress configuration file (/etc/wordpress/wp-config.php) and change some patterns there into appropriate getenv() calls that grab the credentials provided by VCAP_SERVICES.

Dockerfile best practices

The containers folder in the source code has one folder per image, each an example of a different Dockerfile. We use only the wordpress and phpinfo ones here, but I’d like to highlight some best practices.

A Dockerfile is a script that defines how a container image should be built. A container image is very similar to a VM image; the difference is mostly in the file formats they are stored in: VMs use QCOW2, VMDK etc, while Docker uses layered filesystem images. From the application installation perspective, almost everything else is the same. But only Docker and its Dockerfile provide a super easy way to describe how to prepare an image, focusing mostly on your application. The only way to automate this process in the old virtual machine universe is through techniques such as Red Hat’s kickstart. This automated OS installation aspect of Dockerfiles may seem obscure or unimportant, but it is actually the core of what makes a modern DevOps culture viable.

  1. Being a build script, it starts from a base parent image, defined by the FROM command. We used a plain official CentOS image as a starting point. Select your parent images very carefully, in the same way you select a Linux distribution for your company; consider who maintains the base image and whether it is well maintained. (See the sketch after this list.)
  2. Avoid creating images manually, such as by running a base container, issuing commands interactively and then committing it. All the logic to prepare the image should be scripted in your Dockerfile.
  3. In case complex file editing is required, capture the edits in patches and use the patch command in your Dockerfile, as I did in the wordpress Dockerfile.
    To create a patch:

    diff -Naur configfile.txt.org configfile.txt > configfile.patch

    Then see the wordpress Dockerfile to understand how to apply it.

  4. Whenever possible, use official distribution packages instead of downloading libraries (.zip or .tar.gz) from the Internet. In the wordpress Dockerfile I enabled the official EPEL repository so I can install WordPress with YUM. The same happens in the Django and NGINX Dockerfiles. Also note how I don’t have to worry about installing PHP and the MySQL client libraries: they get installed automatically when YUM installs the wordpress package, because PHP and MySQL are its dependencies.
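
A minimal sketch that follows these practices (the base image tag and package list are illustrative, not the exact Dockerfiles from the repo):

FROM centos:centos7

# official distribution packages only
RUN yum -y install epel-release && \
    yum -y install wordpress && \
    yum clean all

# complex config edits are captured as a patch and applied here
COPY wordpress.conf.patch /tmp/
RUN cd /etc/httpd/conf.d && \
    patch < /tmp/wordpress.conf.patch && \
    rm /tmp/wordpress.conf.patch

EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]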

When Docker on Bluemix is useful

CloudFoundry (the execution environment behind Bluemix) has its own Open Source container technology, called Warden, and CloudFoundry’s Dockerfile-equivalent is called a Buildpack. Just to illustrate, here is a WordPress buildpack for CloudFoundry and Bluemix.

Choosing Docker for some parts of your application means giving up some native integrations and facilities naturally and automatically provided by Bluemix. With Docker you’ll have to control and manage a few more things yourself. So go with Docker, instead of a buildpack, if:

  • You need portability, i.e. to move your runtimes in and out of Bluemix/CloudFoundry.
  • A buildpack you need is less well maintained than the equivalent Linux distribution package, or you need a reliable and supported source of pre-packaged software in a way only a major Linux distribution can provide.
  • You are not ready to learn how to use and configure a complex buildpack, like the Python one, when you are already proficient with your favorite distribution’s Python packaging.
  • You need advanced Apache HTTPD features such as mod_rewrite, mod_autoindex or mod_dav.
  • You simply need more control over your runtimes.

The best balance is to use Bluemix services/APIs/middleware and native buildpacks/runtimes whenever possible, and go with Docker in specific situations, leveraging the integration that Docker on Bluemix provides.

WordPress on Fedora with RPM, DNF/YUM

WordPress is packaged for Fedora and can be installed as a regular RPM (with DNF/YUM). The benefit of this method is that you don’t need to mess around with configuration files or filesystem permissions, and since everything is pre-packaged to work together, additional configuration is minimal. At the end of this 3-minute tutorial, you’ll get a running WordPress under an SSL-enabled Apache, using MariaDB as its backend.

All commands need to be executed as root.
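
The full steps are in the article; just to give the flavor, the core of it is something like this (a sketch with current Fedora package names; the database setup and the remaining details follow in the tutorial):

dnf install wordpress mariadb-server mod_ssl
systemctl enable --now mariadb httpd
mysql_secure_installation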

Microsoft Windows on the Power platform with KVM and QEMU

With the release of KVM for Power approaching on the horizon, there has been a lot of talk about running Microsoft Windows on Power.

Just a quick recap: KVM is the Linux kernel technology that allows running virtual machines very efficiently, and QEMU is the software that emulates the various aspects of a computer (serial ports, network, BIOS/firmware, disk etc). QEMU predates the KVM project and makes it possible to run another complete operating system inside it, though rather slowly, since every aspect of the hardware is emulated.


OpenShift for Platform as a Service Clouds

At the Fedora 20 release party, another guy stepped up and presented and demonstrated OpenShift, which was the most interesting new thing from Red Hat for me. First of all, I had to switch my mindset about cloud from IaaS (infrastructure as a service, where the granularity is virtual machines) to PaaS. I had heard the PaaS buzzword before, but never took the time to understand what it really means and its implications. Well, I had to do that at that meeting so I could follow the presentation, of course hammering the presenter with questions all the time.

GMail as mail relay for your Linux home server

Since my Fedora Post-installation Configurations article, some things have changed in Fedora 20. For example, for security and economy reasons, Sendmail is no longer installed by default. Here are the steps to make your Linux home computer able to send system e-mails, such as alerts or messages from things that run on cron. All commands should be run as the root user. This is certified to work on Fedora 21.

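As a generic illustration only (not necessarily the exact method described in the full article), relaying through GMail with Postfix usually boils down to something like this:

dnf install postfix cyrus-sasl-plain

# append to /etc/postfix/main.cf
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

# /etc/postfix/sasl_passwd (then hash it and start Postfix)
[smtp.gmail.com]:587 your.account@gmail.com:your-app-password

postmap /etc/postfix/sasl_passwd
systemctl enable --now postfix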

Things I learned and discovered at Latinoware 2013

 

iPhone Call History Database

Whether you are doing forensics or just want better reports about your call patterns, the iPhone Call History database can be very handy.

If you have a jailbroken iPhone, you can access the database file directly. If not, you can still access it offline by simply copying the file from an unencrypted iTunes backup to some other folder on your computer and manipulating it there. Here are the real file paths inside the iPhone and their counterparts in an iTunes backup folder:

IBM Brazil’s Linux site has been shut down, and that is a good thing

The old IBM Linux site

IBM Brazil’s official Linux site used to live at http://ibm.com/br/linux/ and it was our virtual HQ back when the Linux Impact Team existed and I was part of it. Our team was formed at the time to establish the idea that Linux, Free Software, Open Standards and so on are good things, to demystify some confusing beliefs, to help IBM customers use Linux with our products, etc.


Drupal is Gonna Change Your World

Forget expensive and proprietary MS Access. Forget about applications built on top of complex multi-tab spreadsheets. Drupal with Content Construction Kit, Views and Faceted Search is the right, and far better, solution for you.

Forget about building Flash-only web sites. Drupal and its modules are a better and semantically correct way to build your Web 2.0 site.

Forget about PHP, ASP or JSP development from scratch. Drupal and its modules will get your site running faster, with near-zero programming.

This is just a note for people building websites and general applications.

Me, Drupal and the Breakers

I have dedicated the last few months to studying Drupal. I did it in my spare time (say, from midnight to 6 AM) and it was a long learning curve.

For those who don’t know, Drupal is a Content Management System (CMS) for the Web. A website builder, in other words. Any kind of website.

A few weeks ago I got past the breakers. Now it is all calm seas; in other words, the fight against the tidal wave of missing knowledge has been won. My saga with CMSs began with WordPress, when I built this very blog. From there I learned what atomic content is, how to manage it correctly, taxonomies, tags, feeds, mashups, the blogosphere, and the wonders of the semantic web.

WordPress is a CMS optimized for blogs, which is why it can afford the luxury of being easy to use. OK, you can build other kinds of sites with it, but that requires heavy intervention in its theming engine, and over time that non-blog site will become unmanageable: an aberration.

If WordPress was made for blogs, Drupal was made for any kind of site. The cost is that its building blocks are more abstract and, as a consequence, harder to understand. Moreover, Drupal Core by itself is somewhat ugly, not very practical, and doesn’t do much.

During the learning process you have to dedicate a good chunk of time to getting to know its ecosystem of plugins and extensions. Getting past the breakers, then, means overcoming the following steps:

  1. Understand Drupal’s basic elements and how they relate to each other: nodes, taxonomy, clean URLs, modules, roles and permissions, themes, etc.
  2. Get to know a reasonable set of modules that integrate well and greatly extend the functionality of Core. Some examples of extremely powerful extensions/modules: Views, Content Construction Kit, Faceted Search, Panels, CSS Injector.

I built two relatively complex, semantic sites, with multiple category types, faceted search, distinct layouts, a professional look etc, without writing a single line of code. OK, to be completely honest, I wrote about 30 lines of CSS to beautify some page elements. That’s it. I can say there are a few dozen people delighted with one of them, running on the intranet of my company (IBM), and that it is so functional, simple and interesting that even my boss bought the idea and is now selling it across the company.

Drupal has the merit of combining two very important characteristics, each of which is extremely valuable on its own:

  1. Its Core is extremely well architected, aiming at economy and total extensibility.
  2. It managed to build an ecosystem of extensions that make the work (once past the breakers) a highly productive pleasure.

There are other CMSs out there (Plone, Joomla etc) which I don’t know in practice, but I think they hardly match Drupal’s maturity and solidity. The payoff is that Drupal is winning some incredible references, such as the White House website, Sony, MTV, etc.

Write down this prediction: within 2 or 3 years, Drupal will be to the world of websites what Linux is today to the world of operating systems; it will make no sense to build a site without it.

WhiteHouse.gov migrated to Drupal

Drupal is one of the best content management systems out there, and it is Open Source.

Some say the choice was made because the Obama administration encourages Open Source, blah blah blah. But the truth is that they probably simply picked the best option: Drupal. On Tim O’Reilly’s blog there are also more details about the infrastructure: Red Hat Linux as the OS and MySQL as the database.

I use it daily and can say it is extremely well architected and has a vibrant community. I can also say that nowadays it makes no sense to build a site from scratch without a powerful and flexible content management tool like Drupal.

Well, the proof that the White House site runs on Drupal is in the HTML it sends to the browser.

Drupal signatures in the White House site’s HTML

The highlighted parts are typical Drupal URIs.

If Linux reigns today in the universe of server operating systems, Drupal will likewise reign in the universe of websites within 2 or 3 years.

Organize fast and precisely your MP3 files with ID3v2 tags

This is a set of personal notes and a tutorial for everyone about how to correctly organize and tag MP3 files using the id3 command line tool.

General way to tag MP3 files:

id3 -M -2 [-v] [-t title] [-a artist] [-l album] [-n tracknr] [-y year] [-g genre] [-c comment] file.mp3

Recursively tag with ID3v2 a tree with many directories containing MP3 files, setting artist and genre:

id3 -v -2 -R -a "João Gilberto" -g "Bossa Nova" *mp3

Rewrite the Title tag of each file capitalizing the first letter of each word:

id3 -v -2 -t %+t *mp3

Rename files based on track number and song name (e.g. “02 - Song Name.mp3”), zero-padding track numbers smaller than 10:

id3 -v -2 -f "%#n - %t.mp3" *mp3

Add a suffix to the current Author tag:

id3 -2 -a "%a e Spokfrevo Orquestra" *mp3

Copy current Author tag to the Composer tag:

id3 -v -2 -wTCOM %a *mp3

Use the “Artist” (TPE1) and “Album Artist” (TPE2) tags in a different way to correctly group songs by album on your MP3 player:

id3 -2 -wTPE2 "Various Artists"  Café_Del_Mar_*/*mp3

or, alternatively with the id3v2 program:

id3v2 --TPE2 "Various Artists" Café_Del_Mar_*/*mp3

Scan the track number (%n) and song name (%t) from each file name and set them as ID3 tags, along with an artist name, album name and genre:

id3 -2 -a "The Artist Name" -l "The Album Name" -g "The Genre Name" -m "%n - %t.mp3"

The id3 program is available for multiple platforms, including Linux and Windows. You can find RPM packages for Fedora Linux on my site.

The Difference Between SUSE↔OpenSUSE and Red Hat↔Fedora

The difference between SUSE and OpenSUSE, and between Red Hat and Fedora, is still a very recurring question, however obvious it may be to many people.

When we compare Linux distributions like SUSE with OpenSUSE and Red Hat with Fedora, they are structurally identical, but some internal components differ in version. For example, OpenSUSE ships kernel version 2.6.y while SUSE ships 2.6.x; their timelines are slightly out of sync because OpenSUSE pursues innovation while SUSE pursues stability and support. The same goes for Fedora and Red Hat, respectively.

OpenSUSE has no formal support (community only), while SUSE has it through Novell. Fedora has no formal support (community only), while Red Hat Linux has it through Red Hat.

For a hardware or software vendor, when there is a problem with one of its products running alongside these operating systems and debugging reveals that the cause lies in the OS, Novell or Red Hat are only committed to fixing it if the customer is running a SUSE or Red Hat whose support was purchased.

Another advantage is that such a hardware or software vendor's lab will have an easier time trying to reproduce the problem on SUSE or Red Hat than on OpenSUSE or Fedora, because the former are the supported ones and the ones running in its test environments.

As far as Novell is concerned, using SUSE without buying its annual support is the same thing as using OpenSUSE: from a bug-fixing standpoint, the support is nonexistent. The same goes for the Fedora↔Red Hat pair, the latter being less confusing because the two have completely different names.

To better understand the difference between the SUSE and Red Hat families of Linux and choose the best one, see this article.

A Media Center at Home

Since we got a 52″ Samsung LCD TV almost a year ago as a gift from relatives, I knew it was time to attach a dedicated computer to it and have a full digital media experience in the living room. I'll share here my experience building and running this thing, which leaves all my guests impressed and wanting one of their own.

Things you can do with a Media Center

  1. Play all your digital music (MP3, M4A, FLAC etc) as albums, custom playlists or randomly.
  2. Browse all your digital music semantically, by Genre, Artist, Song Name or Album. This is very practical and much faster than searching for a CD on your shelf.
  3. Tune in to hundreds of Internet radio stations that play all kinds of specific music, such as New Age, 80’s, 70’s, Classical, Flamenco, etc.
  4. Watch movies downloaded from the Internet in Full HD quality (1080p) or almost (720p), with or without subtitles. Who needs Blu-ray?
  5. Show your last trip’s photos as a slideshow on a 52″ TV. Who needs to print photos on paper anymore? You can also play music from your MP3 collection in the background while watching the slideshow.
  6. Browse photos by trip, year and the people who appear in them (if you tag them).
  7. Watch the video clips from your last trip on a 52″ TV.
  8. Download a collection of 80’s music videos, invite your friends and throw a really fun multimedia 80’s party.
  9. Watch YouTube videos on a 52″ TV.
  10. Browse Google Maps on a 52″ TV.
  11. Control all of the above with a nice, handy $20 remote control.
  12. Let your iPhone/iPod browse, access and play all your music as if it were loaded on your iPhone, through UPnP and PlugPlayer.

How to build a Media Center

It's easy and cheap to build a Media Center. In fact, the most expensive component is the TV, not the computer. You can do it with whatever operating system you like: Linux, Windows Vista or Mac. I wanted to do it with Linux because I am more fluent with that platform, but I had to use Vista because the Linux audio drivers for my computer were not ready at that time. I'll lay out below all the conceptual components in a modular way so you can understand what is important in each one. But usually you will find them well integrated together on a single board. In fact, unless you know what you are doing, I recommend using integrated components, such as motherboards that have a good GPU plus audio integrated into a single HDMI output connector.

The physical ingredients to build a Media Center are:

  1. An LCD TV. It looks like Plasma is an obsolete technology, but I'm not the right person to ask about that. An LCD or Plasma TV is just a plain big computer monitor; there are no big differences compared to the computer monitor you are using right now to read this. Make sure the TV you buy has an HDMI input connector, is Full HD (that is, its physical resolution goes up to 1920×1080, a.k.a. 1080p, or more) or at least Full HD Ready (its maximum physical resolution is less than 1920×1080, but it can handle 1920×1080 signals with some distortion), and has a VGA input connector and a stereo audio input connector.
  2. A regular dedicated computer with at least a dual-core CPU and 2GB RAM. This will be connected to the TV, so forget about using it as a regular desktop. Intel or AMD will do here. If you will only play those low-quality, old, 700MB DivX/Xvid files, a generation before dual core (such as an AMD Turion) will do, but if you are going to enter the HD world with H.264 (a.k.a. x264), MP4 and MKV, you'll need at least 2 cores. The 2GB RAM figure is a guess and you may do well with a bit less, but I never tested that. My system is a quad-core AMD Phenom with 4GB RAM (because I use it for other purposes too) on an XFX 8200 HDMI-enabled motherboard (this board has unsolved issues with audio over HDMI and high-power CPUs, so I would recommend you look for another brand or model).
  3. A video card/chip that can go up to 1920×1080 resolution with a DVI or HDMI output connector. People keep saying that you need NVidia, and that is not true; let me explain. NVidia or ATI GPUs (graphical processing units) have capabilities and hardware accelerators used by advanced 3D games, not by video players. So unless you are also going to use this PC as an advanced gaming station, any GPU (a.k.a. graphics card/chip) will do the job, including those very popular Intel GPUs found on board in laptops. Just make sure to configure your BIOS and set video RAM to the maximum, otherwise you will have video delay problems playing Full HD (1080p) videos. If the video card only has VGA output, that's fine too, but be aware that you'll need extra cables for audio. Read the next item to understand why.
  4. An audio card that outputs 7, 8 or 13 channels of sound. Stereo (2 channels) is old school. Any regular DVD today has 5.1 surround audio (6 channels: 2 front, 2 rear, 1 center and 1 subwoofer) and you want to take advantage of that. This is very common and easy to find in stores nowadays; just make sure this component is integrated with the video component above and that both use one single HDMI output connector.
  5. Remote control. Your folks will call you a complete geek if they see you browsing photos and music with a keyboard and mouse. Out of fashion. I bought a simple but effective infrared remote control, with a receiver that plugs into USB, for about $20. It has specific buttons for Pictures, Video and Music and works well with Vista Media Center.
  6. Lots of storage. If you are going to collect HD movies, rip DVDs, store photos and rip all your CDs, start with at least a 1TB hard drive. Also make sure you have internal space in your computer for additional hard drives, because you will run out of space sooner or later. Another option is a motherboard with external SATA connectors (similar to USB connectors), so you can attach external SATA hard drives for increased speed and flexibility. An example of such external SATA storage is Seagate's FreeAgent XTreme.
  7. A silent power supply. Nobody thinks about this, but I believe it is very important. Since this PC will stay in your living room or some other place for multimedia contemplation, you don't want to be disturbed by the computer's fan noise while listening to your collection of zen Ambient music. Spend a few dollars more and make sure your power supply is quiet. I am a happy and zen user of a 450W Huntkey power supply.
  8. HDMI cable. This is the single cable you should use to connect the Media Center PC to your TV. This single cable carries Full HD video and 13 channels of audio, should cost about $20, and is a clean and modern solution.


Good network layout for a home Media Center

These are the approximate Brazilian prices I paid for the hardware parts:

Description                                  Part Number        Price US$
Motherboard XFX 8200 GeForce                 MI-A78S-8209       $172.22
AMD Phenom Quad-core 9750                    HD9750WCGHBOXSN    $338.89
Seagate Barracuda 750GB                      9BX156-303         $205.56
4GB RAM                                                         $133.33
HUNTKEY Power Supply 14CM EPS12V 450W        LW-6450SG          $94.44
HDMI cable                                                      $16.67
Nice PC case                                                    $138.89
Gotec Remote Control 3801 for Media Center                      $26.61
Total                                                           $1,126.61

Home Networking

You may want to have Media Center(s) in several spots of your home playing media from a central network file server located somewhere else.

You should be careful not to overload your home wireless network. I had bad experiences streaming HD media from one computer to another over WiFi. A single wall in between can dramatically decrease the kilobits per second the wireless signal can carry, to a level lower than your movie's bitrate. The result is unwatchable movies while streaming. Big photos will also take longer to load, to the point of negatively affecting your ambient slideshow.

To avoid that:

  1. Have your files physically connected to your Media Center. This can be a plain internal disk (my choice) or an external SATA, FireWire or USB attached disk. Remember that USB is much slower (even slower than FireWire) and file transfers (such as copying lots of movies to/from a friend) will take longer.
  2. Have a separate file server but connect it to your Media Center over a wired network.

Bad network layout for a home Media Center

Software Requirements

Your Media Center will have several simultaneous purposes. The most visible one is to feed your TV with content, but I also use mine as a host for several virtual machines, as a web server and file server, and to download things. I use mine 40% as a visible Media Center, 30% as a Media Server (to serve media to other computers) and 30% as a host for other purposes.

Forget about using your Media Center as a regular PC with keyboard and mouse. It is simply not practical and will prevent your wife and kids from using it, because you are hogging its TV. You can connect to it and work with it remotely, though, with SSH, VNC, Desktop Sharing, Remote Desktop or whatever technology your platform supports. And this can happen while your folks are watching a movie. I find this way of managing my Media Center very practical and productive.

  • Linux-based Media Center

    Linux would be my preferred platform for running a Media Center. It is highly configurable and gives its owner a lot of power. To feed your TV, use MythTV or XBMC. Just make sure that devices such as the remote control, audio and HDMI interfaces have drivers and will work on Linux. I had problems with that.

  • Mac OS-based Media Center

    If you are an Apple person, a Mac mini will do the job. It is compact, silent, has a strong enough processor and comes with a nice remote control. If Mac OS is your platform of choice, use Front Row or XBMC. You will also need codecs to play all types of media, so download the free Perian codec pack. I don't know many people who use Mac OS as a Media Center; let me know if you do. You can also use an Apple machine to run Windows.

  • Windows Vista-based Media Center

    Windows Vista has a lot of improvements for managing media compared to Windows XP. The native File Explorer support for MP3 and photo tagging is excellent, uses open standards such as ID3v2 (MP3) and EXIF and IPTC (JPEG photos), and Vista Media Center has partial support for browsing your media collection through these tags (album, artist, genre, date the picture was taken, IPTC tags etc). Strangely, Vista Media Center does not support browsing by multiple genres and multiple artists, so an album simultaneously tagged with the genres “Samba” and “MPB” will appear only when you list by “Samba”, not by “MPB”.

    Microsoft locks its desktop operating systems in a way that multiple users can't use them simultaneously, even if multiple users are created on the OS. This can be fixed by installing a small terminal services-related patch. There is also a post-SP1 version of the hack.

    So the modus operandi is to create one user called Media that automatically logs in and runs the Media Center program at boot, and another one for me to log in remotely with Remote Desktop and run stuff simultaneously. The Media user has to be an administrator, and codec packs and plugins must be installed by that user.

    To play advanced and HD audio and video, H.264, MKV, MP4, DivX/Xvid, FLAC etc, you will also need a codec pack for Windows. I recommend the K-Lite Codec Pack and I use its Mega edition. Having that, Vista Media Center will play any type of media.

    I must say that Windows alone can't satisfy all my media management needs. That's why I run Linux in a virtual machine on the Media Center for bulk manipulation of MP3s, photos, video compression, etc.

Still on Vista Media Center, I use several useful plugins:

  • Media Control. Improves usability of the remote control and lets you set subtitle and audio languages, enables fast forwarding etc while playing video.
  • Google Maps for Windows Media Center. Turns my 52″ TV into an interactive map that I can control with my remote control. I don’t know how life was before this.
  • Yougle. Lets you access Internet media from Vista Media Center. In other words, lets you browse and watch YouTube videos, Flickr photos, Internet radios etc.

Happy entertainment!

Dear Debian and Ubuntu communities,

I would like to get a reading on how Debian and Ubuntu are being used in your companies, schools and communities.

Is Ubuntu mostly on the desktop? Has it already moved to the servers?

Does Ubuntu dominate the desktop and Debian the server?

Windows on the desktop and Linux on the server? Which Linux?

Neither one nor the other?

Open your heart and share your opinion!!

I was invaded again: more passwords for you not to use

It took some time for my honeypot to be invaded again in a useful way. They were attacking constantly, but were using it only to connect to IRC rooms. Useless for me.

But yesterday a sweet cracker invaded it again and left behind about 240,000 non-unique user+password combinations he was using to invade other machines.

They were all merged into the Compromised Passwords file, now with 142,002 unique user+password combinations that you must never use.

Enjoy.

Compromised Passwords

I love to see crackers attacking my computers. I even wrote a Linux Magazine article about a successful invasion (in Portuguese).

For some time now, I have been leaving a virtual machine with an easy password ready to be attacked, just to see these crackers' leftovers, their rootkits and what they do once they break into my system.

Guess what: once inside, they use my system to attack more machines or to create DDoS-capable networks ready to take some website down. They are not looking for my files, nor my other passwords. They are actually after my compute power, and they want to preserve their anonymity by using my Internet address. Pretty useless lifestyle.

Those crackers will rarely exploit some Apache or Samba vulnerability to get inside. Few of them have the brains for such a difficult task. They manage to get inside a system simply because some user password is very simple. They use port scanners to find your open SSH TCP port and then try thousands of user+password combinations until they log in. Once inside, they install these same scanning tools and user+password databases to attack other computers on the Internet.

I started to collect these user+password databases into an alphabetically sorted file and release it here, so people can check whether a password they are choosing is well known to crackers or not.

The compromised passwords file currently has 142,002 different user+password combinations that you should not use, and it can be downloaded here.

Some classic combinations you'll find in the file are:

root changeme
root default
root l1nuxb0x
root passw0rd
root password
oracle oracle123
sapdb sapdb

Remember, you will greatly reduce the chance of your Linux/Unix/BSD/Mac computer being attacked simply by following these rules:

  1. Disable SSH “root” login by having the following in your /etc/ssh/sshd_config file:
    PermitRootLogin no
  2. Don't use a user+password combination that can be found in the file I linked above. Use passwords that are difficult to guess.
  3. Change your SSH daemon's listening port to something other than the default 22 in /etc/ssh/sshd_config. This is a bit inconvenient (but very effective) because you'll have to remember to provide the non-standard port every time you want to access your system; see the sketch after this list.
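
For reference, the relevant lines in /etc/ssh/sshd_config could look like the sketch below (2222 is just an example port; pick your own and restart the SSH daemon after editing the file):

# /etc/ssh/sshd_config
# listen on a non-default port instead of 22
Port 2222
# never accept direct root logins
PermitRootLogin no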

Pretty obvious, but I keep finding Internet-connected systems with very weak and well-known passwords.

Open Source with Maturity

For anyone who wants an example of a mature approach to adopting Open Source in a company, just look at how Petrobras does it, in an article at Convergência Digital.

Some exciting quotes from Jorge Sued, Petrobras' ICT manager:

  • “there have already been selections in which open source-based applications won, not because of cost, but because of technical quality”
  • “There is nothing free in the open source business model. Our idea is to create standards because it is a new world. There are support, maintenance and evolution issues that we need and want to define”
  • “Free software is gaining ground. The proof is that Petrobras is going to create norms for using it”

When I give talks about Open Source versus Closed Source, I emphasize the following points:

  1. Companies should preferably use and acquire only software that is mature for the intended purpose.
  2. The Closed Source model reaches maturity through this cycle: (a) development ⇒ (b) establishment of local support ⇒ (c) massive use ⇒ (a) more development.
  3. The Open Source model reaches maturity through this cycle: (a) development ⇒ (b) massive use ⇒ (c) establishment of local support ⇒ (a) more development.

The only difference between the two models is the order in which things happen.

This also reminds me of something a colleague likes to say: “Open Source is only free if your time is worth nothing.”

I have written extensively about this subject in several articles: Open Source na Prática, Open Source geeks should read this, Quando abrir o código fonte.

Linux ever stronger on special-purpose PCs

News like Banco do Brasil migrating its 40,000 ATMs to Linux will become more and more frequent.

Let me explain. A company that needs to maintain an infrastructure of widely scattered computers must have a team concerned with the following points:

  1. Getting support for the operating system
  2. Developing customizations of the operating system
  3. Developing the specific business applications that run on those computers
  4. Developing integration layers between those applications and the company's central services
  5. Developing new functions according to constant requests from the business side
  6. Preparing automated installation and update procedures for all of the above
  7. Constantly testing all of the above

With the exception of the first item, all of these must necessarily happen inside the company, or be outsourced to a team that works so closely integrated with the company that it can also be considered internal.

For a team that has to dive this deeply into the technology of such a project, adding the first item to its set of responsibilities may even be beneficial, because it brings more control.

So, in cases like this, instead of going to the market in search of paid OS support, building internal expertise to provide that support is a small, easily absorbed step, besides reducing license and annual support costs.

Companies of this kind are the big retailers, such as Casas Bahia, Pão de Açúcar, Droga Raia, Lojas Marabraz, etc., all of them success stories in the use of Linux at their points of sale.

From an IT perspective, big banks are not very different from retailers. They are actually identical in terms of architecture and information flow.

Banrisul and Banco do Brasil are the pioneers in Brazil down this path, and they herald the entry of the other giants, the Itaús, Bradescos and Unibancos, into this domain of ATMs and teller stations running Linux. Simply because it makes no technical or financial sense whatsoever to use any other operating system.

Psychopaths use Slackware!?

I swear, the following news item appeared today in the sensationalist newspaper Hora H. Since it has no permalink, here is a screenshot instead.

Psychopaths use Slackware!?

This was all that was missing to definitively prove that Slackware users are murderers who drink their victims' blood!

Well, the news item is real, but my claim was a joke… Like most longtime Linux users, I started with Slackware, and it is a great distribution.

Source: Rodrigo Missiaggia

Linux Journal Index, March 2008

  1. Percentage of users who click on ads at least once a month: 1
  2. Number of times per month those users click on ads: 1
  3. Of every three people who click on ads, the number who are women: 2
  4. Projected 2008 advertising sales, in billions of US dollars: 486
  5. Projected 2008 online advertising sales, in billions of US dollars: 44.6
  6. Percentage of the projected 2008 market represented by online advertising: 9.4
  7. Years until online advertising surpasses radio advertising: 0
  8. Years until online advertising surpasses magazine advertising: 2
  9. Lines of code searched by Koders.com: 766,893,913
  10. Lines of code searched by KruglePublic, in billions: 2.6
  11. Code repositories searched by Krugle.com: 600
  12. Thousands of projects searched by KruglePublic: 100
  13. Number of projects registered on SourceForge: 164,138
  14. Number of users registered on SourceForge: 1,744,635
  15. Services listed on the SourceForge Marketplace at its launch: 600
  16. Millions of Linux-based mobile phones sold by Motorola: 9
  17. Percentage of Motorola's mobile phone portfolio served by the company's MotoMagx Linux platform: 60
  18. Percentage of Linux CPUs running AMD Athlon: 14.71
  19. Percentage of Linux CPUs running Pentium 4: 12.15
  20. Percentage of Linux CPUs running other Pentiums: 24.97

Sources

  • 1–3: AOL
  • 4–8: AdAge
  • 9: Koders.com
  • 10–12: Krugle.com
  • 13, 14: SourceForge.net
  • 15: CNN.com
  • 16, 17: AmericasNetwork.com
  • 18–20: Linux Counter (counter.li.org); numbers gathered by December 9, 2007

By Doc Searls. Original: http://www.linuxjournal.com/article/9976#mpart1

Linux Journal Index, February 2008

  1. Number of x86 processors needed to perform the same amount of work as an IBM System z mainframe: 250
  2. Power consumption of a zSeries mainframe as a percentage of that needed by 250 x86 processors: 2
  3. Percentage of all physical servers that will be virtualized by 2011: 50
  4. Number of partners in Google's Open Handset Alliance for its Linux-based phone platform, Android: 30
  5. Number of Google employees working on the Android initiative: 100
  6. Millions of mobile phones sold worldwide in the third quarter of 2007: 289
  7. Minimum number of billions of dollars Google will bid in the auction for the American 700MHz spectrum: 4.6
  8. Number of locomotive engine makers that did well in the diesel engine business: 0
  9. Billions of phone lines in the world: 4
  10. Billions of mobile phone accounts: 2.68
  11. Millions of Bluetooth-enabled devices shipped in 2007: 800
  12. Millions of Nokia phones in use: 900
  13. Age of Nokia as a company: 142
  14. Billions Nokia is spending to become a “consumer Web media company”: 9
  15. Billions of mobile phones that will be sold in 2008: 1.3
  16. Percentage of mobile phones that will be sold in Asia/Pacific in 2008: 82
  17. Percentage of Linux among the top 10 most reliable hosting companies according to Netcraft in September 2007: 50
  18. Percentage of Linux among the top 3 most reliable hosting companies according to Netcraft in September 2007: 100
  19. Percentage of Linux among the top 48 most reliable hosting companies according to Netcraft in September 2007: 43.75
  20. Percentage of the top 48 most reliable hosting companies that run Linux, FreeBSD, Solaris or F5 Big-IP (BSD-based) in September 2007: 66.7

Sources

  • 1, 2: IBM and its Power Estimator Tool, CNN
  • 3: IDC, via Guardian.co.uk
  • 4: The Register
  • 5–7, 12–16: Forbes
  • 8: Bob Frankston
  • 9, 10: Trends in Telecommunication Reform 2007, from the ITU, via Dilanchian.com.au
  • 11: Laptop Magazine
  • 17–20: Netcraft.com

By Doc Searls. Original: http://www.linuxjournal.com/article/9943#mpart1

FISL 2008 and Open Source at IBM

Some people have been asking me why IBM is not taking part in FISL 2008.

I could tell from their tone of voice (or from their Jabber/Google Talk lines) that what they were really asking was whether IBM is ceasing to support Open Source. It is not.

Open Computing

IBM continues to support Linux, Open Source and Open Standards in an absolutely central way.

Many of IBM's most strategic products have Open Source elements at their core, such as Eclipse, Apache and several libraries. Moreover, Linux plays a fundamental role as the operating platform for a number of initiatives around Cloud Computing, Grid, Web Services and so on.

Linux is, in fact, the suggested platform, whenever possible, when implementing solutions at customers.

Open Source versus Proprietary Software

Taking off my company hat, my opinion is that IBM has a very well balanced strategy for using and integrating Open Source. A lot of innovation still happens in the closed source software universe, and there is no reason not to exploit that commercially. But it is very clear inside the company that Open Source is no longer just an infrastructure thing and is advancing into middleware and the levels above it as well. This is widely seen as a natural route for the software industry. And if IBM occasionally shows some resistance when a piece of Open Source starts competing directly with one of our products, that is just a reflex of the survival instinct, which soon gives way to adaptation.

IBM is not taking part in FISL this year because of a temporary internal tactical matter. Some groups changed, people moved around (as happens in every company), and there simply wasn't enough time. That's all.

But I was really longing to go.