Monday, August 28, 2017

Raspberry Pi Zero Flickr Photoframe


I came across a used 20" Apple Cinema Display (A1081, 1680x1050) on the street. Even though I'm not an Apple user, I couldn't resist picking it up. It didn't have an integrated power supply (a design practice that probably won Apple many fans but put off many like me), so I bought one (24 V, 5 A) off eBay for $16. Surprisingly, the monitor worked flawlessly.

I initially thought of selling the setup on Craigslist but then had the idea of converting it into a digital photo frame, so I stripped the LCD panel out of the monitor's enclosure. I planned to drive it with a Raspberry Pi Zero W, which only has a mini-HDMI port, while the display uses DVI. That meant getting an HDMI-to-DVI adapter, and also bundling up the 6+ foot DVI cable behind the panel, creating a mess. While researching adapters, I realized that both interfaces carry essentially the same signals: four TMDS pairs (three data plus one clock) and an I2C (DDC) channel.
So I decided to cut the DVI cable short and solder its 18 wires directly to a cut-up mini-HDMI cable. This link helped.

HDMI to DVI mapping


Now it was time to hook it up to a Raspberry Pi Zero W and test it out. The display came up without any glitches, confirming that the HDMI-to-DVI wiring was correct.

I made a simple frame for the panel from a length of L-cross-section wood from Lowe's and attached all the electronics to the back of the panel with double-sided tape. I cut half-inch-wide aluminum strips from the original display enclosure and used them as feet for the photo frame.

Using Python, I set it up to display Flickr Explore images: a script downloads each day's Explore photos and kicks off the eog image viewer. With some trivial modifications, the script can instead display images from the Pi's SD card, or download and display personal photos from Google Photos.
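The core of the approach is small: ask the Flickr API for the day's Explore ("interestingness") list, download each photo into a local folder, and point an image viewer at that folder. Here is a minimal sketch, assuming you have a Flickr API key/secret (API_KEY/API_SECRET below are placeholders) and the flickrapi package installed; the full script I ended up with is listed at the end of the post.

#!/usr/bin/env python
# Minimal sketch: grab today's Flickr Explore photos and show them fullscreen.
import datetime as dt
import os
import subprocess
import urllib2
import flickrapi

flickr = flickrapi.FlickrAPI(u'API_KEY', u'API_SECRET', format='parsed-json')
today = str(dt.datetime.today().date())
explore = flickr.interestingness.getList(date=today, per_page=20)

if not os.path.exists('explore_images'):
    os.mkdir('explore_images')

for photo in explore['photos']['photo']:
    # Build the documented static photo URL from the record ('_b' = large size)
    url = 'https://farm{0}.staticflickr.com/{1}/{2}_{3}_b.jpg'.format(
        photo['farm'], photo['server'], photo['id'], photo['secret'])
    path = 'explore_images/{0}_{1}.jpg'.format(photo['id'], photo['secret'])
    if not os.path.isfile(path):
        with open(path, 'wb') as out:
            out.write(urllib2.urlopen(url).read())

# Start the viewer on the downloaded folder (eog here; the final script uses feh)
subprocess.call(['eog', '--fullscreen', 'explore_images'])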

Notes:
1. I still don't have any control over the display brightness. I think the display expects brightness to be controlled over USB. If you know a way to control the brightness of this display, let me know in the comments.
2. I am using the eog utility to view images, which doesn't have any image transition effects. If you know an image viewer that can display transition effects, let me know in the comments.
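Note 2 is partly addressed by the 2018 update below: the script now uses feh, which still has no cross-fade transitions but handles fullscreen slideshows with a fixed delay, per-image caption files, and zoom-to-fill scaling. Stripped down, the call the updated script makes looks roughly like this:

import os
import signal
import subprocess
import time

# feh options used by the script below:
#   -F fullscreen, -x borderless, -Y hide the mouse pointer, -z random order,
#   --zoom fill  scale each image to fill the screen,
#   -D 15        advance every 15 seconds,
#   -K captions  read per-image caption text files from the 'captions' directory
proc = subprocess.Popen(
    ['feh', '-FxYz', '--zoom', 'fill', '-D', '15', '-K', 'captions', 'explore_images'],
    preexec_fn=os.setsid)  # New process group so the slideshow can be killed cleanly

time.sleep(3600)  # Let it run for an hour
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)  # Stop the slideshow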


Frame and aluminum stand
(Note 3mm x 8mm bolts used to brace the panel)
Slot for aluminum stand
Joint glued with Elmer's wood glue
Back panel
Touch controls
5v tapped for RPI power
Raspberry Pi Zero W
(Note DVI to HDMI conversion)



On stand back view

On stand side view


On stand front view


 Final


Update 2018/03/23:
  • Added support in the code to randomly display images from one of the following sources:
    1. Flickr Explore
    2. Boston Globe Big Picture
    3. Local storage
  • Added support for a daily reboot
  • Added support to wait for daytime before displaying images and to shut off the display/HDMI otherwise (a quick sketch of the on/off commands follows this list)
  • Replaced the CFL back-lighting with LED back-lighting
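The display/HDMI shut-off relies on two commands the script below invokes through subprocess: xset dpms force to blank or wake the X display, and the Raspberry Pi's vcgencmd display_power to cut or restore the HDMI output entirely. A hypothetical helper wrapping the same calls:

import subprocess

def set_display(on):
    # Blank/unblank the X screen and switch the Pi's HDMI output off/on
    subprocess.call(['xset', 'dpms', 'force', 'on' if on else 'off'])
    subprocess.call(['vcgencmd', 'display_power', '1' if on else '0'])

set_display(False)   # Night: screen off
set_display(True)    # Morning: screen back on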
 
 
 
 
 
Python Code



#!/usr/bin/env python

from bs4 import BeautifulSoup
import sys
import flickrapi
import urllib2
import os
import shutil
import glob
import datetime as dt
import subprocess
import signal
import time
import random
import re

def main():
    # Infinite loop
    while True:
        # Turn off display
        subprocess.Popen(['xset', 'dpms', 'force', 'off'])
        subprocess.Popen(['vcgencmd', 'display_power', '0'])
        check_hours([22,23,0,1,2,3,4,5,6],wait=True) # Wait out the overnight hours (10 PM to 7 AM)
        get_best_explore_images('explore_images',15,20) # Images from Flickr Explore
        get_globe_images('globe_images',60) # Images from Boston Globe Big pictures
        # Turn on display
        subprocess.Popen(['vcgencmd', 'display_power', '1'])
        subprocess.Popen(['xset', 'dpms', 'force', 'on'])
        # Run hour-long slideshows while it's daytime (7 AM to 10 PM)
        while check_hours([7,8,9,10,11,12,13,14,15,16,17,18,19,20,21],wait=False):
            display_images()
        # Reboot once a day after the display window ends
        subprocess.Popen(['sudo', 'reboot'])
            
def display_images():            
    # 1-in-5 chance Flickr Explore, 1-in-5 Boston Globe, 3-in-5 local photo lists
    choice=random.choice([0,1,2,3,4])
    if choice == 0:
        print 'Using Explore list'
        #proc=subprocess.Popen(['feh', '-FxYzZ', '-D', '15', '-K', 'captions',images_dir],preexec_fn=os.setsid)
        proc=subprocess.Popen(['feh', '-FxYz', '--zoom', 'fill', '-D', '15', '-K', 'captions','explore_images'],preexec_fn=os.setsid) # Zoom images to fill screen
    elif choice == 1:
        print 'Using BigPicture list'
        #proc=subprocess.Popen(['feh', '-FxYzZ', '-D', '15', '-K', 'captions',images_dir],preexec_fn=os.setsid)
        proc=subprocess.Popen(['feh', '-FxYz', '--zoom', 'fill', '-D', '15', '-K', 'captions','globe_images'],preexec_fn=os.setsid) # Zoom images to fill screen
    else:
        file_list_list=['2004.list','2005.list','2006.list','2007.list','2008.list','2009.list','2010.list',
                        '2011.list','2012.list','2013.list','2014.list','2015.list','2016.list','2017.list',
                        'phone.list']
        file_list=random.choice(file_list_list)    
        print 'Using list '+file_list
        shutil.copyfile('lists/'+file_list,'current.list')
        proc=subprocess.Popen(['feh', '-FxYzZd', '-D', '15', '--auto-rotate', '-f', 'current.list'],preexec_fn=os.setsid)
    time.sleep(3600) # Let the slideshow run for an hour before picking a new source
    # Send kill signal to proc
    os.killpg(os.getpgid(proc.pid),signal.SIGTERM)

# With wait=False, returns True if the current hour is in the given list, False otherwise.
# With wait=True, sleeps in 10-minute steps while the hour stays in the list, then returns False.
def check_hours(hours,wait=False):
    while True:
        now=dt.datetime.now()
        if(now.hour in hours):
            if wait is True:
                print 'Waiting hour {}'.format(now.hour)
                time.sleep(600)
            else:
                return True
        else:
            return False

def get_best_explore_images(images_dir,history,daily_count):
    cleanup_dir(images_dir,history)
    today = dt.datetime.today().date()
    date = today            
    while ((today-date)<dt.timedelta(days=history)):
        photo_list=get_explore_images_list(str(date),images_dir,daily_count)
        download_photo_list(photo_list,images_dir)
        date=date-dt.timedelta(days=1)

def get_globe_images(images_dir,history):
    cleanup_dir(images_dir,history)
    photo_list=get_bg_photos_list(history)
    download_photo_list(photo_list,images_dir)

def cleanup_dir(images_dir,history):
    today = dt.datetime.today().date()
    caption_dir = images_dir+'/captions'
    if not os.path.exists(images_dir):
        os.mkdir(images_dir)
    if not os.path.exists(caption_dir):
        os.mkdir(caption_dir)
    file_list = glob.glob(images_dir+'/*.jpg')
    file_list = file_list+glob.glob(caption_dir+'/*.txt')
    for my_file in file_list:
        file_date=dt.datetime.fromtimestamp(os.path.getmtime(my_file)).date()
        if((today-file_date)>dt.timedelta(days=history)):
            print 'Deleting {}'.format(my_file)
            os.remove(my_file)
        #else:
        #    print 'Not deleting {} [{}]'.format(my_file,file_date)

def get_explore_images_list(date_s,images_dir,num_images):
    print 'Getting {} top explore photos for {}'.format(num_images,date_s)
    api_key     = u'' # Get API Key/Secret from Flickr
    api_secret  = u''
    photo_list  = []
    flickr      = flickrapi.FlickrAPI(api_key, api_secret, format='parsed-json')
    try:
        explore     = flickr.interestingness.getList(date=date_s,per_page=num_images)# Max 500
    except Exception as message:
        print 'Exception {}'.format(message)
        return photo_list
    explore_list  = explore['photos']['photo']
    for photo in explore_list:
        photo_name  = '{}_{}_h.jpg'.format(photo['id'],photo['secret'])
        photo_title = '{}'.format(photo['title'].encode('utf-8')) 
        (url,ar) = get_photo_url(flickr,photo['id'])
        if url == '':
            print 'Skipping    [{:.2f}] {:60s} {}'.format(ar,photo_title,url)
            continue
        print 'Adding [{:.2f}] {:60s} {} to download list'.format(ar,photo_title,url)
        photo_list.append((date_s,photo_name,photo_title,url))
    return photo_list

def download_photo_list(photo_list,images_dir):
    for (date_s,photo_name,photo_title,url) in photo_list:
        full_path   = images_dir+'/'+photo_name
        if not os.path.isfile(full_path):
            print 'Downloading [{:s}] {:60s} {}'.format(date_s,photo_title[:50],url)
            try:
                response = urllib2.urlopen(url)
                if(response.getcode() == 200):       
                    photo_file = response.read()
                    with open(full_path,'wb') as output:
                        output.write(photo_file)
                    subprocess.call(['touch','-d',date_s,full_path])
                    with open(images_dir+'/captions/'+photo_name+'.txt','wb') as output:
                        output.write(photo_title)
                else:
                    print 'Download error'
            except Exception as message:
                print 'URL open exception {}'.format(message)

# Pick the last (typically largest) available size whose aspect ratio is
# between 1.4 and 1.8 (roughly 3:2 to 16:9); returns ('',ar) if none qualifies
def get_photo_url(flickr,photo_id_in):
    sizes=flickr.photos.getSizes(photo_id=photo_id_in)
    url=''
    ar=0
    for size in sizes['sizes']['size']:
        #if((size['label'] == 'Large'     )or
        #   (size['label'] == 'Large 1600')):
        #    url=size['source']
        ar=float(size['width'])/float(size['height'])
        if((ar > 1.4) and (ar < 1.8)):
            url=size['source']
    return (url,ar)

def get_bg_photos_list(history):
    url_base = 'http://www.bostonglobe.com'
    print 'Getting stories..'
    html = urllib2.urlopen(url_base+'/news/bigpicture')
    soup=BeautifulSoup(html, "lxml")
    stories = soup.find_all('a',{'class':'pictureInfo-headline'})
    globe_photos=[]
    p1=re.compile('^.+(\d\d\d\d/\d\d/\d\d)/.+$')
    today = dt.datetime.today().date()
    for story in stories:
        url=url_base+story.attrs['href']
        story_name=story.find_all(text=True)[0].encode('utf-8')
        m=p1.match(url)
        if not m:
            continue # Skip stories whose URL doesn't embed a yyyy/mm/dd date
        date_s=m.group(1).replace("/","-")
        date=dt.datetime.strptime(date_s,'%Y-%m-%d').date()
        if((today-date)<dt.timedelta(days=history)):
            print 'Getting image list for ['+date_s+'] '+story_name
            story_pics=get_bg_story_pics(date_s,story_name,url)
            globe_photos.extend(story_pics)
    return globe_photos 

def get_bg_story_pics(date_s,story_name,url):
    html = urllib2.urlopen(url)
    soup = BeautifulSoup(html, "lxml")
    photos   = soup.find_all('div',{'class':'photo'})
    captions = soup.find_all('div',{'class':'gcaption geor'})
    i=0
    story_pics=[]
    p2=re.compile('^.+/([^/]+)$')
    for photo in photos:
        img = photo.find_all('img')
        #print img[0].attrs['src']
        text=captions[i].find_all(text=True)
        photo_title= story_name+' - '+text[1].encode('utf-8')
        url = img[0].attrs['src']
        url = url[2:]
        url = 'http://'+url
        m=p2.match(url)
        if(m):
            photo_name = m.group(1)
            story_pics.append((date_s,photo_name,photo_title,url))
        i=i+1
    return story_pics


if __name__ == '__main__':
    main()