Scala Functional Programming Examples



1. Traversal: foreach

foreach(f: (A) => Unit): Unit

scala> val a = List(1,2,3,4)
val a: List[Int] = List(1, 2, 3, 4)

scala> a.foreach((x:Int) => {println(x)})
1
2
3
4

scala> a.foreach((x:Int) => println(x))   -- braces are optional for a single-expression body
1
2
3
4

scala> a.foreach(x => println(x))   -- type inference: the parameter type can be omitted
1
2
3
4

scala> a.foreach(println(_))   -- underscore shorthand (usable when the parameter appears exactly once in the body and there is no nested call)
1
2
3
4
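Since foreach returns Unit, it is only useful for its side effects. A minimal sketch (the buffer and the doubling are illustrative, not from the original post) that uses foreach to accumulate into a mutable collection instead of printing:

```scala
import scala.collection.mutable.ListBuffer

object ForeachDemo extends App {
  val a = List(1, 2, 3, 4)

  // foreach returns Unit: its only use is the side effect it performs,
  // here appending each doubled element to a mutable buffer.
  val buf = ListBuffer.empty[Int]
  a.foreach(x => buf += x * 2)

  println(buf.toList)  // List(2, 4, 6, 8)
}
```

When you want the transformed values back as a collection, prefer map (next section) over foreach with a buffer.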


2. Mapping: map

scala> val a = List(1,2,3,4)
val a: List[Int] = List(1, 2, 3, 4)

scala> a.map(x => x + 1)
val res1: List[Int] = List(2, 3, 4, 5)

scala> a.map(_ + 1)   -- underscore shorthand (parameter appears exactly once in the body, no nested call)
val res2: List[Int] = List(2, 3, 4, 5)

scala> a.map[String](x => s"${x}x")
val res5: List[String] = List(1x, 2x, 3x, 4x)

scala> a.map(x => s"${x}a")
val res7: List[String] = List(1a, 2a, 3a, 4a)

3. Flat mapping: flatMap

Equivalent to a map followed by flatten.

scala> val a = List("hadoop hive spark flink flume", "kudu hbase sqoop storm")
val a: List[String] = List(hadoop hive spark flink flume, kudu hbase sqoop storm)

scala> a.map(_.split(" "))
val res8: List[Array[String]] = List(Array(hadoop, hive, spark, flink, flume), Array(kudu, hbase, sqoop, storm))

scala> res8.flatten
val res9: List[String] = List(hadoop, hive, spark, flink, flume, kudu, hbase, sqoop, storm)

scala> a.flatMap(_.split(" "))
val res10: List[String] = List(hadoop, hive, spark, flink, flume, kudu, hbase, sqoop, storm)
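flatMap also accepts a function returning Option, since an Option flattens to zero or one element. A sketch (assumes Scala 2.13+ for toIntOption; the input data is made up) that parses strings and silently drops failures in one pass:

```scala
object FlatMapOptionDemo extends App {
  val raw = List("1", "x", "2", "3y")

  // toIntOption (Scala 2.13+) returns Some(n) on success, None on failure;
  // flatMap unwraps the successes and discards the Nones.
  val nums = raw.flatMap(_.toIntOption)

  println(nums)  // List(1, 2)
}
```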

4. Filtering: filter

scala> val a = List(1,2,3,4,5,6,7,8,9)
val a: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)

scala> a.filter(x => x % 2 == 0)
val res12: List[Int] = List(2, 4, 6, 8)

scala> a.filter(_ % 2 == 0)
val res11: List[Int] = List(2, 4, 6, 8)
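When both the matching and the non-matching elements are needed, partition does the work of two filter calls in a single pass. A small sketch (not from the original post):

```scala
object PartitionDemo extends App {
  val a = List(1, 2, 3, 4, 5, 6)

  // partition returns a pair: (elements satisfying the predicate, the rest)
  val (even, odd) = a.partition(_ % 2 == 0)

  println(even)  // List(2, 4, 6)
  println(odd)   // List(1, 3, 5)
}
```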

5. Sorting

sorted: default (natural) ordering


scala> var a = List(3,1,2,9,7)
var a: List[Int] = List(3, 1, 2, 9, 7)

scala> a.sorted
val res13: List[Int] = List(1, 2, 3, 7, 9)

sortBy: sort by a derived key

def sortBy[B](f: (A) => B): List[A]

scala> var a = List("01 hadoop", "02 flume", "03 hive", "04 spark")
var a: List[String] = List(01 hadoop, 02 flume, 03 hive, 04 spark)

scala> a.sortBy(_.split(" ")(1))
val res14: List[String] = List(02 flume, 01 hadoop, 03 hive, 04 spark)
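Note that sortBy compares the derived key with that key's natural ordering, so string keys sort lexicographically even when they look numeric. A sketch of the pitfall and the fix (the data is illustrative):

```scala
object SortByKeyDemo extends App {
  val a = List("10 hadoop", "2 flume", "1 hive")

  // String keys compare character by character: "10" sorts before "2"
  val lexical = a.sortBy(_.split(" ")(0))
  println(lexical)  // List(1 hive, 10 hadoop, 2 flume)

  // Converting the key to Int restores numeric order
  val numeric = a.sortBy(_.split(" ")(0).toInt)
  println(numeric)  // List(1 hive, 2 flume, 10 hadoop)
}
```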

sortWith: custom comparison

def sortWith(lt: (A, A) => Boolean): List[A]

scala> val a = List(2,3,1,6,4,5)
val a: List[Int] = List(2, 3, 1, 6, 4, 5)

scala> a.sortWith((x,y) => x < y)
val res15: List[Int] = List(1, 2, 3, 4, 5, 6)

scala> a.sortWith((x,y) => x > y)
val res16: List[Int] = List(6, 5, 4, 3, 2, 1)

scala> a.sortWith(_ > _)
val res17: List[Int] = List(6, 5, 4, 3, 2, 1)
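sortWith is most useful when the comparison involves more than one field. A sketch with hypothetical (name, score) records (not from the original post), sorting by score descending with ties broken by name ascending:

```scala
object SortWithDemo extends App {
  val people = List(("lisi", 80), ("zhangsan", 95), ("wangwu", 80))

  // Primary key: score, descending; secondary key: name, ascending.
  val sorted = people.sortWith { (a, b) =>
    if (a._2 != b._2) a._2 > b._2 else a._1 < b._1
  }

  println(sorted)  // List((zhangsan,95), (lisi,80), (wangwu,80))
}
```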

6. Grouping: groupBy

def groupBy[K](f: (A) => K): Map[K, List[A]]

scala> val a = List("zhangsan"->"m", "lisi"->"f", "wangwu"->"m")
val a: List[(String, String)] = List((zhangsan,m), (lisi,f), (wangwu,m))

scala> a.groupBy(x => x._2)
val res18: scala.collection.immutable.Map[String,List[(String, String)]] = HashMap(f -> List((lisi,f)), m -> List((zhangsan,m), (wangwu,m)))

scala> res18.map(x => x._1 -> x._2.size)
val res19: scala.collection.immutable.Map[String,Int] = HashMap(f -> 1, m -> 2)
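The groupBy-then-size pattern above is a common way to count occurrences. A sketch applied to plain words (the data is made up):

```scala
object WordCountDemo extends App {
  val words = List("spark", "hive", "spark", "flink", "hive", "spark")

  // groupBy(identity) maps each distinct word to the list of its occurrences;
  // mapping each group to its size yields the counts.
  val counts: Map[String, Int] =
    words.groupBy(identity).map { case (w, ws) => w -> ws.size }

  println(counts("spark"))  // 3
  println(counts("flink"))  // 1
}
```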

7. Aggregation: reduce

reduce (i.e. reduceLeft) and reduceRight

def reduce[A1 >: A](op: (A1, A1) => A1): A1

A1 is a supertype of the element type A. In op (for reduce/reduceLeft), the first A1 is the result accumulated so far and the second A1 is the next element.

scala> val a = List(1,2,3,4,5,6,7,8,9,10)
val a: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> a.reduce((x,y) => x + y)
val res24: Int = 55

scala> a.reduce(_ + _)
val res25: Int = 55
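With + the direction does not matter, but on a List reduce is reduceLeft, and with a non-commutative operator reduceLeft and reduceRight give different results. A sketch with subtraction (illustrative, not from the original post):

```scala
object ReduceDirectionDemo extends App {
  val a = List(1, 2, 3, 4)

  // reduceLeft folds from the left: ((1 - 2) - 3) - 4 = -8
  val left = a.reduceLeft(_ - _)
  println(left)   // -8

  // reduceRight folds from the right: 1 - (2 - (3 - 4)) = -2
  val right = a.reduceRight(_ - _)
  println(right)  // -2
}
```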

8. Folding: fold

def fold[A1 >: A](z: A1)(op: (A1, A1) => A1): A1
z: A1 — the initial value
op: (A1, A1) => A1 — combines the accumulated value with the next element

scala> val a = List(1,2,3,4,5,6,7,8,9,10)
val a: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> a.fold(0)((x,y) => x + y)
val res28: Int = 55

scala> a.fold(100)((x,y) => x + y)
val res29: Int = 155

scala> a.fold(0)(_ + _)
val res26: Int = 55

scala> a.fold(100)(_ + _)
val res27: Int = 155
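fold constrains the accumulator to a supertype of the element type (A1 >: A), whereas foldLeft lets the accumulator be any type, which is handy when building a result of a different type. A sketch (illustrative, not from the original post):

```scala
object FoldVsFoldLeftDemo extends App {
  val a = List(1, 2, 3, 4, 5)

  // fold: accumulator and elements share the type A1 >: A
  val sum = a.fold(0)(_ + _)
  println(sum)     // 15

  // foldLeft: the accumulator type (String here) may differ from Int
  val joined = a.foldLeft("")((acc, x) => acc + x)
  println(joined)  // 12345
}
```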
Source: https://51itzy.com/kjqy/56593.html