
Problems encountered in Python and machine-learning code: 1. Running pyspark code in PyCharm without a Hadoop environment; 2. Encoding error when reading a file; 3. Wrong number of arguments for Rating; 4. Python 2 and Python 3 both installed, running pyspark fails.

1. Running pyspark code in PyCharm without a Hadoop environment

Could not locate executable null\bin\winutils.exe in the Hadoop binaries.           

Solution

Unpack a copy of the Hadoop package, point HADOOP_HOME at it, and add its bin directory (which contains winutils.exe) to the Path variable.
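
If changing the system environment variables is inconvenient, the same effect can usually be had from the script itself, because the JVM that pyspark launches inherits the driver process's environment. A minimal sketch, assuming the Hadoop package was unpacked to C:/hadoop (a hypothetical path) with winutils.exe in its bin directory; the variables must be set before the SparkContext is created:

import os
from pyspark import SparkContext

# Hypothetical location of the unpacked Hadoop package; winutils.exe is
# expected at HADOOP_HOME\bin\winutils.exe.
os.environ["HADOOP_HOME"] = "C:/hadoop"
os.environ["PATH"] = os.environ["HADOOP_HOME"] + "/bin" + os.pathsep + os.environ["PATH"]

# Create the SparkContext only after the environment is in place,
# otherwise the JVM is already launched without HADOOP_HOME.
sc = SparkContext("local[*]", "demo")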

2. Encoding error when reading a file

SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \xXX escape

The offending code

The file path was written with backslashes; in an ordinary Python string, "\x" starts a hexadecimal escape, which is exactly the "truncated \xXX escape" the error complains about:

lines = spark.textFile("C:\xin\code\temp\ratings.dat")           

Use forward slashes instead:

lines = spark.textFile("C:/xin/code/temp/ratings.dat")           

3. Wrong number of arguments for Rating

__new__() takes 4 positional arguments but 5 were given

Code

model = ALS.train(training, rank=50, iterations=10, lambda_=0.01)           

Error message

File "C:\ProgramData\Anaconda3\lib\site-packages\pyspark\mllib\recommendation.py", line 233, in <lambda>
    ratings = ratings.map(lambda x: Rating(*x))
TypeError: __new__() takes 4 positional arguments but 5 were given           

The error says __new__() expects 4 positional arguments but was given 5.

Look at the source:

( File "C:ProgramDataAnaconda3libsite-packagespysparkmllibrecommendation.py", line 30)

class Rating(namedtuple("Rating", ["user", "product", "rating"])):           

Rating actually takes only 3 fields: user, product, and rating (the "4 positional arguments" in the error include the implicit cls of __new__). The 5th value comes from the training rows themselves: ALS.train maps each row through Rating(*x), so each row must contain exactly 3 values, yet here it evidently still carries a 4th field.
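
A minimal sketch of preparing the training RDD, assuming the standard MovieLens ratings.dat layout user::movie::rating::timestamp; the timestamp is simply dropped before the rows reach ALS.train:

from pyspark.mllib.recommendation import ALS, Rating

# "spark" is the same SparkContext used above.
lines = spark.textFile("C:/xin/code/temp/ratings.dat")

# Keep only user id, movie id and rating; the 4th field (timestamp) is discarded.
training = lines.map(lambda line: line.split("::")) \
                .map(lambda f: Rating(int(f[0]), int(f[1]), float(f[2])))

model = ALS.train(training, rank=50, iterations=10, lambda_=0.01)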

4. Python 2 and Python 3 both installed: running pyspark fails

File "/usr/lib/python2.7/site-packages/pyspark-2.3.1-py2.7.egg/pyspark/worker.py", line 176, in main
    ("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 2.7 than that in driver 3.6, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.

在"~/.bashrc"添加配置(重新開機虛拟機)。

export PATH="/root/anaconda3/bin:$PATH"
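
The error message itself points at PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON, so the worker interpreter can also be pinned explicitly. A sketch of doing that from the driver script, reusing the anaconda path from the export above; whether it takes effect depends on how the job is launched, so the ~/.bashrc export remains the more general fix:

import os
from pyspark import SparkContext

# Must be set before the SparkContext exists; worker processes pick up this interpreter.
os.environ["PYSPARK_PYTHON"] = "/root/anaconda3/bin/python"

sc = SparkContext(appName="demo")

The driver side is simply whichever python runs the script, so launching it with anaconda's python3 keeps the two versions aligned.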