Saturday, March 20, 2010

[About Alchemy] Installing on Mac/Linux and generating an SWC

http://labs.adobe.com/wiki/index.php/Alchemy:Documentation:Getting_Started#Steps
http://thesven.com/?p=140

Terminal command:
gcc stringecho.c -O3 -Wall -swc -o stringecho.swc
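
For reference, here is a minimal ActionScript sketch of how the resulting SWC is typically used once it is linked into a Flash/Flex project. This assumes the SWC was built from the stringecho.c sample in the Getting Started guide above and that it exposes an echo function; the cmodule package name follows the library name.

// Usage sketch (assumption: stringecho.swc from the tutorial exposes "echo").
import cmodule.stringecho.CLibInit;  // generated wrapper; package is cmodule.<libname>

var loader:CLibInit = new CLibInit();
var lib:Object = loader.init();      // returns an object holding the exposed C functions
trace(lib.echo("hello alchemy"));    // calls through into the compiled C code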

[Links] Pixel Bender resources

http://www.adobe.com/devnet/pixelbender/

http://codeonwort.tistory.com/category/%ED%94%BD%EC%85%80%20%EB%B2%A4%EB%8D%94


About Pixel bender : http://babogomdori.textcube.com/58

Pixel bender Tutorial #1 : http://www.diebuster.com/?p=751
Pixel bender Tutorial #2 : http://www.diebuster.com/?p=759
Pixel bender Tutorial #3 : http://www.diebuster.com/?p=770
Pixel bender Tutorial #4 : http://www.diebuster.com/?p=947

Pixel bender Tutorial : http://wonderfl.net/tag/PixelBenderTutorial

Image processing algorithms : http://www.ph.tn.tudelft.nl/Courses/FIP/noframes/fip-Contents.html

Pixel Bender developer guide, Korean translation (1/2) : http://codeonwort.tistory.com/entry/픽셀-벤더-개발자-안내서-12
Pixel Bender developer guide, Korean translation (2/2) : http://codeonwort.tistory.com/entry/픽셀-벤더-개발자-안내서-22

Pixel Bender (1) - about grayscale : http://cafe.naver.com/uiaa/87
Pixel Bender (2) : http://cafe.naver.com/uiaa/100
Pixel Bender (3) - exercises : http://cafe.naver.com/uiaa/128
Pixel Bender (4) - edge detection : http://cafe.naver.com/uiaa/143

Pixel Bender Exchange : http://www.adobe.com/cfusion/exchange/index.cfm?event=productHome&exc=26&loc=en_us

Pixel bender Example : http://www.anttikupila.com/flash/pixel-bender-levels-example/

Ryan Phelan (Pixel Bender category) : http://www.rphelan.com/category/pixel-bender/

[Links] License summary

http://codeonwort.tistory.com/4

[Links] How to use OpenCV on the iPhone

Related links

http://www.computer-vision-software.com/blog/2009/04/opencv-vs-apple-iphone/comment-page-1/

http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en

http://zaaghad.blogspot.com/2009/02/universal-i386arm-opencv-framework-for.html

Using an Xcode project template

A feature that creates a new project by duplicating an existing one; literally a template.
Paste your project folder into the path below:

Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Project Templates/Application

Friday, March 19, 2010

Using the built-in Mac web server and configuring the httpd PHP module

Link
http://devist.tistory.com/7

Installing and using OpenCV on Mac (Xcode)

Installation link
http://ttti07.egloos.com/3524657

OpenCV is a C-based open-source image processing library.

1. Related links

- Installing via MacPorts
http://opencv.willowgarage.com/wiki/Mac_OS_X_OpenCV_Port

- Installing from a packaged source distribution
http://www.wowjerry.com/36

- Using it from Xcode and developing in C++
http://anybody-has-a-blog.tistory.com/80

Wednesday, March 17, 2010

Google Reader API

Google Reader API links
http://www.niallkennedy.com/blog/2005/12/google-reader-api.html
http://guldook.blogspot.com/2005/12/rss-api.html
http://decoder.tistory.com/55

Google Reader
Google Reader is an online feed aggregator with heavy use of JavaScript and pretty quick loading of the latest feed data from around the web. Google's AJAX front-end styles back-end data published in the Atom syndication format. The data technologies powering Google Reader can easily be used and extended by third-party feed aggregators for use in their own applications. I will walk you through the (previously) undocumented Google Reader API.

Update 10:40 p.m.: Jason Shellen, PM of Google Reader, called me to let me know that Google built its feed API first and the Google Reader application second as a demonstration of what could be done with the underlying data. Jason confirmed my documentation below is very accurate and Google plans to release a feed API "soon" and perhaps within the next month! Google Reader engineer Chris Wetherell has also confirmed the API in the comments below.

A reliable feed parser managed by a third party lowers the barrier to entry of new aggregator developers. Google and its team of engineers and server clusters can handle the hard work of understanding feeds in various formats and states of validation, allowing developers to focus on the interaction experience and other differentiating features. You can also retrieve and synchronize feed subscription lists with an established user base that could be in the millions, providing a better experience for users on multiple devices and platforms. Google Reader's "lens" provides only one view of the available data.

Google Reader users are assigned a 20-digit user ID used throughout Google's feed system. No cookies or session IDs are required to access this member-specific data. User-specific data is accessible using the google.com cookie named "SID."
Feed retrieval

/reader/atom/feed/

Google converts all feed data to Atom regardless of its original publication format. All RSS feed post content appears in the summary element; unlike the My Yahoo! back-end, I found no additional metadata indicating that a feed contains full posts, but Google does publish content data where available.

You may request any feed from the Google Reader system using the following URL structure:

* http://www.google.com/reader/atom/feed/ + [Feed URL]
* Niall Kennedy's Weblog (RSS 2.0)
* Niall's Flickr feed (Atom 0.3)
* del.icio.us popular (RDF)

You may specify the total number of feed entries to retrieve using the n parameter. The default number of feed items returned is 20 (n=20).

Google strips off all the data it does not render in Reader. Stripped data includes namespaced data such as Apple iTunes podcast data and Yahoo! Media RSS, additional author data such as e-mail and home URL, and even copyright data.
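
To make the URL structure concrete, here is a small ActionScript sketch that requests a feed through the Reader back-end with the n parameter. The feed URL is a placeholder, and user-specific calls would additionally need the SID cookie mentioned above.

// Sketch: fetch a feed through Google Reader's Atom proxy (per the URLs above).
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;

var feedURL:String = "http://example.com/index.xml";  // placeholder feed
var request:URLRequest = new URLRequest(
        "http://www.google.com/reader/atom/feed/" + feedURL + "?n=50");  // default is n=20

var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, function(e:Event):void {
    trace(loader.data);  // Atom document, regardless of the feed's original format
});
loader.load(request);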
Subscription list

/reader/atom/user/[user id]/pref/com.google/subscriptions

Google Reader's feed subscription list contains a user's current feed subscriptions as well as past deleted subscriptions. Each feed is contained in an entry complete with feed URL, published and updated dates, and user-specific tags, if present. Current subscriptions are categorized as a reading list state. You may request the full list of feeds by setting the complete parameter to true.

Here is a copy of my Google Reader subscription list with my user ID zeroed out. I am not subscribed to my RSS feed (index.xml) and I have added tags to my Atom feed. Each listed feed contains an author element which appears to be empty regardless of declarations within the original feed. Perhaps Google plans to add some feed claiming services, but its own Google blog has no affiliated author information.
Reading list

/reader/atom/user/[user id]/state/com.google/reading-list

My favorite feature of the Google Reader backend is direct access to a stream of unread entries across all subscribed feeds. Google will output the latest in a "river of news" style data view.

Here is a sample from my limited subscription set. You may specify the total number of entries you would like Google to return using the n parameter -- the default is 20 (n=20).
Read items only

http://www.google.com/reader/atom/user/[user ID]/state/com.google/read

You can retrieve a listing of read items from Google Reader. You might want to analyze the last 100 items a user has read to pull out trends or enable complete search, so this function may be useful. You may adjust the number of items retrieved using the n parameter -- the default is 20 (n=20).
Reading list by tag

/reader/atom/user/[user id]/label/[tag]

You may also view a list of recently published entries limited to feeds of a certain tag. If you have tagged multiple feeds as "marketing" you might want to request just the latest river of news for those marketing feeds. The returned feed contains both read and unread items. Read items are categorized as read (state/com.google/read) if you would like to hide them from view. The number of returned results may be adjusted using the n parameter.
Starred items only

/reader/atom/user/[user id]/state/com.google/starred

Google Reader users can flag an item with a star. These flagged items are exposed as a list of entries with feed URL, tags, and published/updated times included. You may specify the total number of tagged entries to return using the n parameter -- the default value is 20 (n=20).

Google treats starred items as a special type of tag and the output therefore matches the tag reading list.
Add or delete subscriptions

/reader/api/0/edit-subscription

You may add any feed to your Google Reader list using the Google Reader API via an HTTP POST; a short sketch follows the parameter list below.

* /reader/api/0/edit-subscription -- base URL
* ac=["subscribe" or "unsubscribe"] -- requested action
* s=feed%2F[feed URL] -- your requested subscription
* T=[command token] -- expiring token issued by Google. Obtain your token at /reader/api/0/token.
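
A sketch of what that POST might look like in ActionScript, using only the parameters listed above; the feed URL is a placeholder and the token is assumed to have been fetched from /reader/api/0/token beforehand.

// Sketch: subscribe to a feed via /reader/api/0/edit-subscription.
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.net.URLRequestMethod;
import flash.net.URLVariables;

var vars:URLVariables = new URLVariables();
vars.ac = "subscribe";                          // or "unsubscribe"
vars.s  = "feed/http://example.com/index.xml";  // placeholder subscription (URL-encoded on send)
vars.T  = "EXPIRING_TOKEN";                     // placeholder; obtain from /reader/api/0/token

var request:URLRequest = new URLRequest("http://www.google.com/reader/api/0/edit-subscription");
request.method = URLRequestMethod.POST;
request.data = vars;

new URLLoader().load(request);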

Add tags

/reader/api/0/edit-tag

You may also add tags to any feed or individual item via an HTTP POST; a sketch follows the parameter list below.

* /reader/api/0/edit-tag -- base URL
* s=feed%2F[feed URL] -- the feed URL you would like to tag
* i=[item id] -- the item ID presented in the feed. Optional and used to tag individual items.
* a=user%2F[user ID]%2Flabel%2F[tag] -- requested action: add a tag to the feed, the item, or both.
* a=user%2F[user ID]%2Fstate%2Fcom.google%2Fstarred -- flag or star a post.
* T=[special scramble] -- three pieces of information about the user to associate with the new tag. Security unknown and therefore unpublished.
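
A parallel sketch for starring an individual item, again using only the parameter names listed above; the item ID, user ID, and T value are placeholders (the article notes the token format is unpublished).

// Sketch: star an item via /reader/api/0/edit-tag (parameters per the list above).
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.net.URLRequestMethod;
import flash.net.URLVariables;

var vars:URLVariables = new URLVariables();
vars.s = "feed/http://example.com/index.xml";      // placeholder feed URL
vars.i = "ITEM_ID";                                // placeholder item ID from the feed
vars.a = "user/USER_ID/state/com.google/starred";  // star the post
vars.T = "TOKEN";                                  // placeholder; format unpublished per the article

var request:URLRequest = new URLRequest("http://www.google.com/reader/api/0/edit-tag");
request.method = URLRequestMethod.POST;
request.data = vars;
new URLLoader().load(request);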

Conclusion

It is possible to build your own feed reader on top of Google's data with targeted server calls. You can power an application both online and offline using Google as your backend and focus on building new experiences on top of the data. Advanced functionality is available with a numeric Google ID and some variable tweaks.

Google has built the first application on top of this data API, the Google Reader lens, and judging from their choice of URLs the lens may not be Google's last application built on this data set. I like the openness of the data calls and think the Google Reader APIs are simple enough to bootstrap a few new applications within Google or created by third-party developers.

OpenCV usage code on the iPhone

Recording this code here to study how to apply OpenCV on the iPhone.
Let's start!

#import "OpenCVTestViewController.h"

#import <AudioToolbox/AudioToolbox.h> // for the AudioServices* calls; the OpenCV headers are assumed to come in via OpenCVTestViewController.h

@implementation OpenCVTestViewController
@synthesize imageView;

- (void)dealloc {
AudioServicesDisposeSystemSoundID(alertSoundID);
[imageView dealloc];
[super dealloc];
}

#pragma mark -
#pragma mark OpenCV Support Methods

// NOTE: call cvReleaseImage() on the returned image when you are done with it.
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
CGImageRef imageRef = image.CGImage;

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
iplimage->depth, iplimage->widthStep,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);

IplImage *ret = cvCreateImage(cvGetSize(iplimage), IPL_DEPTH_8U, 3);
cvCvtColor(iplimage, ret, CV_RGBA2BGR);
cvReleaseImage(&iplimage);

return ret;
}

// NOTE: convert the image to RGB color mode before passing it to this function.
- (UIImage *)UIImageFromIplImage:(IplImage *)image {
NSLog(@"IplImage (%d, %d) %d bits by %d channels, %d bytes/row %s", image->width, image->height, image->depth, image->nChannels, image->widthStep, image->channelSeq);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
CGImageRef imageRef = CGImageCreate(image->width, image->height,
image->depth, image->depth * image->nChannels, image->widthStep,
colorSpace, kCGImageAlphaNone|kCGBitmapByteOrderDefault,
provider, NULL, false, kCGRenderingIntentDefault);
UIImage *ret = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return ret;
}

#pragma mark -
#pragma mark Utilities for internal use

- (void)showProgressIndicator:(NSString *)text {
//[UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
self.view.userInteractionEnabled = FALSE;
if(!progressHUD) {
CGFloat w = 160.0f, h = 120.0f;
progressHUD = [[UIProgressHUD alloc] initWithFrame:CGRectMake((self.view.frame.size.width-w)/2, (self.view.frame.size.height-h)/2, w, h)];
[progressHUD setText:text];
[progressHUD showInView:self.view];
}
}

- (void)hideProgressIndicator {
//[UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
self.view.userInteractionEnabled = TRUE;
if(progressHUD) {
[progressHUD hide];
[progressHUD release];
progressHUD = nil;

AudioServicesPlaySystemSound(alertSoundID);
}
}

- (void)opencvEdgeDetect {
if(imageView.image) {
cvSetErrMode(CV_ErrModeParent);

// Create grayscale IplImage from UIImage
IplImage *img_color = [self CreateIplImageFromUIImage:imageView.image];
IplImage *img = cvCreateImage(cvGetSize(img_color), IPL_DEPTH_8U, 1);
cvCvtColor(img_color, img, CV_BGR2GRAY);
cvReleaseImage(&img_color);

// Detect edge
IplImage *img2 = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
cvCanny(img, img2, 64, 128, 3);
cvReleaseImage(&img);

// Convert the black-and-white result to a 24-bit image, then to a UIImage to show
IplImage *image = cvCreateImage(cvGetSize(img2), IPL_DEPTH_8U, 3);
for(int y=0; y<img2->height; y++) {
for(int x=0; x<img2->width; x++) {
char *p = image->imageData + y * image->widthStep + x * 3;
*p = *(p+1) = *(p+2) = img2->imageData[y * img2->widthStep + x];
}
}
cvReleaseImage(&img2);
imageView.image = [self UIImageFromIplImage:image];
cvReleaseImage(&image);

[self hideProgressIndicator];
}
}

- (void) opencvFaceDetect:(UIImage *)overlayImage {
if(imageView.image) {
cvSetErrMode(CV_ErrModeParent);

IplImage *image = [self CreateIplImageFromUIImage:imageView.image];

// Scaling down
IplImage *small_image = cvCreateImage(cvSize(image->width/2,image->height/2), IPL_DEPTH_8U, 3);
cvPyrDown(image, small_image, CV_GAUSSIAN_5x5);
int scale = 2;

// Load XML
NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"];
CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL);
CvMemStorage* storage = cvCreateMemStorage(0);

// Detect faces and draw rectangle on them
CvSeq* faces = cvHaarDetectObjects(small_image, cascade, storage, 1.2f, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(20, 20));
cvReleaseImage(&small_image);

// Create canvas to show the results
CGImageRef imageRef = imageView.image.CGImage;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef contextRef = CGBitmapContextCreate(NULL, imageView.image.size.width, imageView.image.size.height,
8, imageView.image.size.width * 4,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), imageRef);

CGContextSetLineWidth(contextRef, 4);
CGContextSetRGBStrokeColor(contextRef, 0.0, 0.0, 1.0, 0.5);

// Draw results on the image
for(int i = 0; i < faces->total; i++) {
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

// Calc the rect of faces
CvRect cvrect = *(CvRect*)cvGetSeqElem(faces, i);
CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef, CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale));

if(overlayImage) {
CGContextDrawImage(contextRef, face_rect, overlayImage.CGImage);
} else {
CGContextStrokeRect(contextRef, face_rect);
}

[pool release];
}

imageView.image = [UIImage imageWithCGImage:CGBitmapContextCreateImage(contextRef)];
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);

cvReleaseMemStorage(&storage);
cvReleaseHaarClassifierCascade(&cascade);

[self hideProgressIndicator];
}
}


#pragma mark -
#pragma mark IBAction

- (IBAction)loadImage:(id)sender {
if(!actionSheetAction) {
UIActionSheet *actionSheet = [[UIActionSheet alloc] initWithTitle:@""
delegate:self cancelButtonTitle:@"Cancel" destructiveButtonTitle:nil
otherButtonTitles:@"Use Photo from Library", @"Take Photo with Camera", @"Use Default Lena", nil];
actionSheet.actionSheetStyle = UIActionSheetStyleDefault;
actionSheetAction = ActionSheetToSelectTypeOfSource;
[actionSheet showInView:self.view];
[actionSheet release];
}
}

- (IBAction)saveImage:(id)sender {
if(imageView.image) {
[self showProgressIndicator:@"Saving"];
UIImageWriteToSavedPhotosAlbum(imageView.image, self, @selector(finishUIImageWriteToSavedPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);
}
}

- (void)finishUIImageWriteToSavedPhotosAlbum:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo {
[self hideProgressIndicator];
}

- (IBAction)edgeDetect:(id)sender {
[self showProgressIndicator:@"Detecting"];
[self performSelectorInBackground:@selector(opencvEdgeDetect) withObject:nil];
}

- (IBAction)faceDetect:(id)sender {
cvSetErrMode(CV_ErrModeParent);
if(imageView.image && !actionSheetAction) {
UIActionSheet *actionSheet = [[UIActionSheet alloc] initWithTitle:@""
delegate:self cancelButtonTitle:@"Cancel" destructiveButtonTitle:nil
otherButtonTitles:@"Bounding Box", @"Laughing Man", nil];
actionSheet.actionSheetStyle = UIActionSheetStyleDefault;
actionSheetAction = ActionSheetToSelectTypeOfMarks;
[actionSheet showInView:self.view];
[actionSheet release];
}
}

#pragma mark -
#pragma mark UIViewControllerDelegate

- (void)viewDidLoad {
[super viewDidLoad];
[[UIApplication sharedApplication] setStatusBarStyle:UIStatusBarStyleBlackOpaque animated:YES];
[self loadImage:nil];

NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"Tink" ofType:@"aiff"] isDirectory:NO];
AudioServicesCreateSystemSoundID((CFURLRef)url, &alertSoundID);
}

- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
return NO;
}

#pragma mark -
#pragma mark UIActionSheetDelegate

- (void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex {
switch(actionSheetAction) {
case ActionSheetToSelectTypeOfSource: {
UIImagePickerControllerSourceType sourceType;
if (buttonIndex == 0) {
sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
} else if(buttonIndex == 1) {
sourceType = UIImagePickerControllerSourceTypeCamera;
} else if(buttonIndex == 2) {
NSString *path = [[NSBundle mainBundle] pathForResource:@"lena" ofType:@"jpg"];
imageView.image = [UIImage imageWithContentsOfFile:path];
break;
} else {
// Cancel
break;
}
if([UIImagePickerController isSourceTypeAvailable:sourceType]) {
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = sourceType;
picker.delegate = self;
picker.allowsImageEditing = NO;
[self presentModalViewController:picker animated:YES];
[picker release];
}
break;
}
case ActionSheetToSelectTypeOfMarks: {
if(buttonIndex != 0 && buttonIndex != 1) {
break;
}

UIImage *image = nil;
if(buttonIndex == 1) {
NSString *path = [[NSBundle mainBundle] pathForResource:@"laughing_man" ofType:@"png"];
image = [UIImage imageWithContentsOfFile:path];
}

[self showProgressIndicator:@"Detecting"];
[self performSelectorInBackground:@selector(opencvFaceDetect:) withObject:image];
break;
}
}
actionSheetAction = 0;
}

#pragma mark -
#pragma mark UIImagePickerControllerDelegate

- (UIImage *)scaleAndRotateImage:(UIImage *)image {
static int kMaxResolution = 640;

CGImageRef imgRef = image.CGImage;
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);

CGAffineTransform transform = CGAffineTransformIdentity;
CGRect bounds = CGRectMake(0, 0, width, height);
if (width > kMaxResolution || height > kMaxResolution) {
CGFloat ratio = width/height;
if (ratio > 1) {
bounds.size.width = kMaxResolution;
bounds.size.height = bounds.size.width / ratio;
} else {
bounds.size.height = kMaxResolution;
bounds.size.width = bounds.size.height * ratio;
}
}

CGFloat scaleRatio = bounds.size.width / width;
CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
CGFloat boundHeight;

UIImageOrientation orient = image.imageOrientation;
switch(orient) {
case UIImageOrientationUp:
transform = CGAffineTransformIdentity;
break;
case UIImageOrientationUpMirrored:
transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
break;
case UIImageOrientationDown:
transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
transform = CGAffineTransformRotate(transform, M_PI);
break;
case UIImageOrientationDownMirrored:
transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
break;
case UIImageOrientationLeftMirrored:
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationLeft:
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationRightMirrored:
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeScale(-1.0, 1.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
case UIImageOrientationRight:
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
default:
[NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
}

UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
CGContextScaleCTM(context, -scaleRatio, scaleRatio);
CGContextTranslateCTM(context, -height, 0);
} else {
CGContextScaleCTM(context, scaleRatio, -scaleRatio);
CGContextTranslateCTM(context, 0, -height);
}
CGContextConcatCTM(context, transform);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);
UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

return imageCopy;
}

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingImage:(UIImage *)image
editingInfo:(NSDictionary *)editingInfo
{
imageView.image = [self scaleAndRotateImage:image];
[[picker parentViewController] dismissModalViewControllerAnimated:YES];
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
[[picker parentViewController] dismissModalViewControllerAnimated:YES];
}
@end

Tuesday, March 16, 2010

Using fonts embedded in Flash from Flash Builder or Flex

Font embedding comes with plenty of puzzling issues.

To keep file size down, you can embed only the Unicode ranges of the characters you need when embedding in Flex, but today's topic is how to use a font embedded in Flash.

First, embed the font.

Create a text field and embed the font. Choosing ALL takes far too much space, so select only the characters you actually need.

Select the text field, press F8 to turn it into a MovieClip, and give it a linkage class.

Export it as an SWC for Flash Builder.

In Flash Builder, put the SWC in the lib folder, load it, and try it out.

Declare the linkage class defined in Flash (fontMC) and check the embedded fonts with enumerateFonts.

For the mx TextInput you can either apply the font with setStyle, or set up embedding directly on the TextField obtained via mx_internal::getTextField.

The spark TextInput also works with setStyle, but when I tried to get at its TextField it turned out to be quite different from mx:
set fontFamily on textDisplay.textFlow instead.

With the embedded font applied (the third one is a plain text input):

First: mx:TextInput
Second: s:TextInput

In the mx text field, typing a character that is not embedded erases it as if you had pressed backspace,
but the Spark text control renders non-embedded characters such as Korean just fine. Very considerate.

Extra tip: when embedding a font from Flash, custom anti-aliasing can be applied by setting it on the TextField obtained via mx_internal::getTextField:

tf.sharpness = 100;
tf.thickness = 100;

Source

    <fx:Script>
        <![CDATA[
            import flash.text.Font;        // needed for Font.enumerateFonts below
            import flash.text.TextField;
            import flash.text.TextFormat;
            import mx.core.mx_internal;
            import mx.events.FlexEvent;
            import mx.events.SliderEvent;

            import spark.components.RichEditableText;

            private var tf:TextField;

            protected function application1_creationCompleteHandler(event:FlexEvent):void
            {
                var fontTF:fontMC = new fontMC();             // linkage class from the Flash SWC
                var arr:Array = Font.enumerateFonts(false);   // embedded fonts only
                for(var i:String in arr)
                {
                    trace(arr[i].fontName);
                }

                var fm:TextFormat = new TextFormat(arr[0].fontName);
                tf = TextField(txtIn.mx_internal::getTextField());
                tf.embedFonts = true;
                tf.defaultTextFormat = fm;

                //txtIn.setStyle("fontFamily",arr[0].fontName);

                sTxtInput.textDisplay.textFlow.fontFamily = arr[0].fontName;
            }
        ]]>
    </fx:Script>

    <mx:TextInput id="txtIn" fontSize="20"/>
    <s:TextInput id="sTxtInput" fontSize="20"/>
    <s:TextInput fontSize="20"/>

Using the -define compiler option in Flex

How to

Open the compiler settings pane in your project's properties and you will find "Additional compiler arguments".

Compile-time variables can be assigned there like this: -define=NDRIVE::AIR,false -define=NDRIVE::FLEX,true

They can then be used in code like this (data types other than Boolean work as well):

if(NDRIVE::FLEX == true){
    trace("runs only when FLEX is true");
}

What's more, in Flex this is not limited to plain values: you can gate class, method, and variable declarations themselves.

For example, the titleBG class below is declared twice; only the declaration whose constant is true actually gets compiled.

This comes in handy when AIR and Flex builds share a common class.

[Embed(source="//assets/image/poptitle_upload.png")]
NDRIVE::FLEX
private var titleBG:Class;

[Embed(source="//assets/image/top_bg.png")]
NDRIVE::AIR
private var titleBG:Class;

There are many other compiler options, for example:

-keep-generated-actionscript=true : keeps the generated ActionScript files produced during compilation

-theme=test.css : uses the CSS file at the given path as the default theme

For the remaining options, see the page below:

http://livedocs.adobe.com/flex/3/html/help.html?content=compilers_14.html#157203

Additional note: these constants can also be declared in Flash.

In the publish settings, go to the ActionScript 3.0 settings -> Config constants tab.
(However, in Flash they can only be used as plain variables; unlike Flex, they cannot control whether a declaration itself is compiled.)
(단, 변수용도로만 사용 가능 하고 플렉스처럼 생성자체를 제어하지는 못함.)