
A Fun FizzBuzz Interview Question

So what exactly is Fizz Buzz?

Fizz Buzz is a game children in Western countries often play while learning division. The rules: count from 1 to 100; when you reach a multiple of 3, say "Fizz"; for a multiple of 5, say "Buzz"; and for a number that is a multiple of both 3 and 5, say "FizzBuzz".

It eventually evolved into a programming interview question: write a program that prints the numbers from 1 to 100, except that for multiples of 3 it prints "Fizz", for multiples of 5 it prints "Buzz", and for multiples of both 3 and 5 it prints "FizzBuzz".
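Incidentally, a number is a multiple of both 3 and 5 exactly when it is a multiple of 15, so the combined case collapses to a single modulus test. A quick sanity check of that equivalence in Python:

```python
# Multiples of both 3 and 5 are exactly the multiples of 15
assert all((i % 3 == 0 and i % 5 == 0) == (i % 15 == 0) for i in range(1, 1001))
print("check passed")
```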

Original article: http://joelgrus.com/2016/05/23/fizz-buzz-in-tensorflow/

Chinese translation: http://blog.topspeedsnail.com/archives/11010

Here is a small Python program I wrote myself:

for i in range(1, 101):
    if i % 3 == 0 and i % 5 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)

Running it confirms the output is correct. (The original post includes a screenshot of partial results here.)
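Rather than eyeballing the printed results, the logic can also be spot-checked with assertions. A minimal sketch, using a hypothetical `fizzbuzz` helper that returns the answer for one number as a string:

```python
def fizzbuzz(i):
    # Same decision order as the loop above, returned as a string
    if i % 3 == 0 and i % 5 == 0:
        return "FizzBuzz"
    if i % 3 == 0:
        return "Fizz"
    if i % 5 == 0:
        return "Buzz"
    return str(i)

# A few known values from the game's rules
assert fizzbuzz(1) == "1"
assert fizzbuzz(9) == "Fizz"
assert fizzbuzz(10) == "Buzz"
assert fizzbuzz(30) == "FizzBuzz"
```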

But how would we do it in C++? For a C++ novice like me, the best approach is to borrow code from the experts online:

#include <iostream>
using namespace std;

int main() {
    for (int i = 1; i <= 100; i++) {
        if (i % 15 == 0) cout << "FizzBuzz" << endl;
        else if (i % 5 == 0) cout << "Buzz" << endl;
        else if (i % 3 == 0) cout << "Fizz" << endl;
        else cout << i << endl;
    }
    return 0;
}

The output matches. (The original post shows a screenshot of the results here.)

Finally, here is a TensorFlow program by another developer, who tackled the problem with a fully connected network with two hidden layers; the accuracy is decent.

import numpy as np
import tensorflow as tf

NUM_DIGITS = 10  # enough bits to encode every number below 1024

def binary_encode(i, num_digits):
    # Little-endian binary encoding of i as a 0/1 vector
    return np.array([i >> d & 1 for d in range(num_digits)])

def pro_data():
    # Train on 101..1023 so the test range 1..100 stays unseen
    data_set = np.array([binary_encode(i, NUM_DIGITS) for i in range(101, 1024)])
    data_label = []
    for i in range(101, 1024):
        if i % 15 == 0:
            data_label.append([1, 0, 0, 0])
        elif i % 5 == 0:
            data_label.append([0, 1, 0, 0])
        elif i % 3 == 0:
            data_label.append([0, 0, 1, 0])
        else:
            data_label.append([0, 0, 0, 1])
    return data_set, np.array(data_label)

def predict2word(num, prediction):
    return ['fizzbuzz', 'buzz', 'fizz', str(num)][prediction]

def train_model(epoch=10000):
    # Hidden width, learning rate, batch size, and epoch count follow the linked original post
    train_data, train_label = pro_data()
    X = tf.placeholder('float32', [None, NUM_DIGITS])
    Y = tf.placeholder('float32', [None, 4])
    weights1 = tf.Variable(tf.random_normal([NUM_DIGITS, 100]))
    bias1 = tf.Variable(tf.random_normal([100]))
    weights2 = tf.Variable(tf.random_normal([100, 100]))
    bias2 = tf.Variable(tf.random_normal([100]))
    weights3 = tf.Variable(tf.random_normal([100, 4]))
    bias3 = tf.Variable(tf.random_normal([4]))
    fc1 = tf.nn.relu(tf.matmul(X, weights1) + bias1)
    fc2 = tf.nn.relu(tf.matmul(fc1, weights2) + bias2)
    out = tf.matmul(fc2, weights3) + bias3
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=out, labels=Y))
    train_op = tf.train.GradientDescentOptimizer(0.05).minimize(cost)
    predict_op = tf.argmax(out, 1)
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    for i in range(epoch):
        batch_size = 128
        # Shuffle the training set each epoch
        rand_order = np.random.permutation(range(len(train_data)))
        train_data, train_label = train_data[rand_order], train_label[rand_order]
        for j in range(0, len(train_data) - batch_size, batch_size):
            end = j + batch_size
            sess.run(train_op, feed_dict={X: train_data[j:end], Y: train_label[j:end]})
        print(i, np.mean(np.argmax(train_label, axis=1) == sess.run(predict_op, feed_dict={X: train_data, Y: train_label})))
    numbers = np.arange(1, 101)
    test_data = np.transpose(binary_encode(numbers, NUM_DIGITS))
    test_label = sess.run(predict_op, feed_dict={X: test_data})
    output = np.vectorize(predict2word)(numbers, test_label)
    print(output)

if __name__ == '__main__':
    train_model()
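The key idea in the TensorFlow version is feeding the network each number's binary representation rather than the number itself; `binary_encode` produces a little-endian bit vector. A standalone check of that encoding (the helper is reproduced here so the snippet runs on its own):

```python
import numpy as np

def binary_encode(i, num_digits):
    # Bit d of i, least significant bit first
    return np.array([i >> d & 1 for d in range(num_digits)])

print(binary_encode(10, 4))  # 10 = 0b1010, little-endian -> [0 1 0 1]
```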

Some further links, to learn from the experts:

  1. http://blog.csdn.net/koon/article/details/1540780
  2. http://blog.csdn.net/dgh_dean/article/details/54575778 (a translation of the original article, which solves the problem with deep learning; that interviewee did not pass the interview)
  3. Solutions in many languages: http://coding.memory-forest.com/fizzbuzz%E6%9C%89%E4%BD%95%E8%A7%A3%EF%BC%9F.html
  4. Origin of the problem, with small programs one interviewee wrote across various interviews (click with care): https://www.cnblogs.com/webary/p/6507413.html
